Tuesday, August 30, 2016

Art and Science - Hyperbolic Blanket

A mathematician friend of mine once told me that "math is the study of interesting definitions." Mathematicians like my friend study abstract systems built up from starting assumptions, known as axioms, and attempt to discover new features of these systems through the logic of formal proofs. The choice of foundational axioms matters enormously: a slight change to one of these building blocks can create an entirely new world full of rich structure. Mathematicians choose their axioms by considering what new, useful, or interesting structures arise from them.

Non-Euclidean geometry perfectly encapsulates this exciting feature of mathematics. Last week I had the pleasure of attending a lecture by Dr. Evelyn Lamb about hyperbolic geometry--in particular, visualizing its counterintuitive properties. Hyperbolic geometry is a type of Non-Euclidean geometry that was discovered by changing one of the five fundamental axioms chosen by Euclid over two thousand years ago. The axiom in question, called the parallel postulate, states that, given a line and a point off that line, there is a single, unique line that passes through that point and never intersects the first line, no matter how far the two lines are extended. Change this assumption, and the shape of your space is dramatically altered. 

Euclidean geometry is the geometry of the flat plane, the geometry I was taught in middle school, the geometry you can draw on a piece of paper. The consequences of choosing Euclid's version of the parallel postulate have been explored for thousands of years. From it we get results like the Pythagorean Theorem and the theorem that the interior angles of a triangle add up to 180 degrees, among others. 

So what happens when we change the parallel postulate? Let's rephrase it: given a line and a point off that line, there is NO parallel line that passes through that point that will never intersect the first line. When extended, all lines eventually intersect. This causes the space to close up on itself and gives us spherical geometry, which is familiar to us when we look at a globe. We live on the two-dimensional surface of a sphere, and this axiom system describes it. In this geometry, we get counterintuitive results. For instance, interior angles of a triangle add up to more than 180 degrees.

We can change the parallel postulate again. Here's the third version: given a line and a point off that line, there are an infinite number of lines that pass through that point and never intersect the first line, no matter how far the lines are extended. This is hyperbolic geometry, an even stranger structure that is difficult to describe without the language of mathematics. It is exponentially expansive, and just like a sphere, it is impossible to draw in a Euclidean plane without distortions. This axiom system gives us triangles with interior angle sums that are less than 180 degrees. In fact, in hyperbolic geometry, it is possible to have triangles with interior angle sums of zero!

Dr. Lamb gave us a few techniques for conceptualizing the hyperbolic plane in the talk, from crochet to 3D printed models. But what captured my imagination most was a pattern for a hyperbolic blanket. So I decided to make it.

How can you translate hyperbolic geometry into the surface of a physical blanket? Using tessellation, or tiling. Just as regular hexagons, squares, and equilateral triangles can tile the Euclidean plane, other shapes can tile hyperbolic and spherical surfaces.

In the Euclidean plane, we have 360 degrees of space around every point. A tiling of equilateral triangles demonstrates this. At each point, six equilateral triangles meet, each with all interior angles equal to 60 degrees. Multiplying 6 by 60 yields 360, as expected. The picture below demonstrates this tiling, made from toy magnets.



In spherical geometry, there are fewer than 360 degrees of space around each point. This allows the surface to close into a bounded spherical surface. The tiling below shows five equilateral triangles meeting at each point, each with all interior angles equal to 60 degrees (in the flat picture they don't look equilateral because of distortion; you'll have to trust me). Multiplying 5 by 60 gives us 300, much less than the Euclidean 360. See below.



Finally, in hyperbolic geometry, there are more than 360 degrees of space around each point. This makes the surface expansive and very floppy. The tiling below shows seven equilateral triangles meeting at each point, each with all interior angles equal to 60 degrees (again, they don't look equilateral due to the pesky distortion). Multiplying 7 by 60 gives us 420, more than the Euclidean 360. Here's a picture.



The blanket pattern I used is based on a tiling with four pentagons clustered around each point. The interior angles of a regular pentagon are 108 degrees, so fitting them four-to-a-point gives us 432 degrees to accommodate. The only surface that can do this is hyperbolic. The cozy result is shown below.
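If you want to check this bookkeeping for other tilings, here is a tiny Python sketch (my own illustration, not part of the blanket pattern) that adds up the interior angles meeting at a vertex and names the geometry that can accommodate them:

def vertex_angle_sum(sides, per_vertex):
    """Degrees gathered around one vertex when per_vertex regular sides-gons meet there."""
    interior_angle = 180 * (sides - 2) / sides  # interior angle of a regular polygon
    return per_vertex * interior_angle

# The four tilings discussed above: triangles six, five, and seven to a point,
# and the blanket's four pentagons to a point.
for sides, per_vertex in [(3, 6), (3, 5), (3, 7), (5, 4)]:
    total = vertex_angle_sum(sides, per_vertex)
    kind = "Euclidean" if total == 360 else ("spherical" if total < 360 else "hyperbolic")
    print(f"{per_vertex} x {sides}-gon per vertex: {total:.0f} degrees -> {kind}")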



There is so much more to explore when it comes to hyperbolic geometry, and much of it is available online. For more art featuring hyperbolic tilings, look to the works of M. C. Escher. For some mathematics-based intuition, I recommend this series of Numberphile videos that explore what it would be like to live on a hyperbolic surface.

Tuesday, July 5, 2016

From Photons to Photos: Observational Astronomy with CCDs

Summer research season is underway for me! This year, I am working with Mike Brown on an observational astronomy project. I’ll be using some of the data collected on the Keck Telescope last winter to study Jupiter’s Trojan asteroids.

Observational astronomy is probably what most people imagine when they picture an astronomer’s work, although the exact image in mind might be a bit outdated. Today’s astronomers are more likely sitting behind a computer screen than behind the eyepiece of a telescope, even in work that isn’t strictly computational or theoretical. That’s because astronomy, like many aspects of modern life, has gone digital.

Astronomers first recorded their observations on paper, by hand, until the invention of photography. By the early twentieth century, groundbreaking discoveries, such as Edwin Hubble’s discovery of other galaxies and the expansion of the universe, were being made with the assistance of photographic plates. As photography evolved, so did astronomy. Today, digital cameras capture images with sensors called CCDs, and so do most telescopes: astronomers affix these sensors at the foci of their telescopes in order to collect high-quality data.

How do CCDs capture astronomical images, assuming you have a telescope and a target for your observations? CCD stands for charge coupled device, a name that gives a hint of how it works. A CCD is sectioned into pixels, or bins, and when exposed to light, the bins gain a charge. The charge on each bin is proportional to the amount of light it received: the more charge a bin has, the more light it was exposed to, and the brighter the area of the sky it observed. When the exposure is finished, the charge on each bin is read out and converted into a number, called a count. This transforms the image into an array of counts that represents how much light was detected in each pixel of the CCD.

Arrays, simple lists of numbers, are very easy for computers to store, transfer, and manipulate, so they are a useful format for astronomical data. That kind of conversion just isn’t possible with sketches and photographic plates, and it opens up new possibilities for handling data. Some astronomers today work on “training” computers to perform automatic analyses of these arrays, so computers can quickly accomplish basic tasks like identifying variable stars or Kuiper Belt Objects. Such programs are especially useful with the rise of large-scale digital sky surveys that produce enormous quantities of data on a nightly basis.


A small part of a typical array might look like this. While useful to a computer, it’s very difficult for a human brain to figure out what’s going on without some help. To understand the image, we can rearrange the numbers a bit. To make things even clearer, we can map count values to colors. I’ll pick greyscale, so that we can keep in mind that more counts means more light.
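As a rough illustration of that mapping, here is a short Python sketch that renders an array of counts as a greyscale image (the count values are synthetic stand-ins, not real telescope data):

import numpy as np
import matplotlib.pyplot as plt

# A tiny synthetic stand-in for a CCD count array (a real frame is much larger):
# a faint noisy background with one brighter "source" near the middle.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=5, size=(16, 16)).astype(float)
counts[7:10, 7:10] += 40

# Map counts to greyscale: more counts -> more light -> brighter pixel.
plt.imshow(counts, cmap="gray", origin="lower")
plt.colorbar(label="counts")
plt.title("Counts rendered as a greyscale image")
plt.show()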



Unfortunately, CCDs, as powerful and useful as they are, do introduce their own biases into the data, so our image doesn’t look very clean right now. This problem is easy to correct, as long as you are prepared to encounter it. The CCD-introduced bias can be corrected by taking two specific types of pictures, known as darks and flats, which act like controls in a scientific experiment.

The first type of control picture, the dark, is necessary because of thermal noise in the CCD chip. Thermal noise comes from the heat of the sensor itself: thermal energy in the chip frees electrons just as incoming light does, so charge accumulates even in total darkness. CCDs in telescopes are often cooled to low temperatures to reduce this noise, but it is present as long as the instrument is above absolute zero, so it cannot be eliminated completely. To combat the problem, astronomers prepare a dark, an exposure of the CCD in a completely lightless environment, a bit like taking a picture with a camera that still has the lens cap attached. This way, the CCD records only the thermal noise originating from the instrument itself. Here is what a dark might look like:



The second type of image, the flat, is an image taken of a flat field of uniform light. This could be an evenly illuminated surface. Many astronomers take flats at sunset, when the evening sky is bright enough to wash out the background stars but not so bright that the sensor is overloaded. Since we know the image should be evenly lit, the flat field lets astronomers pick up systematic defects in the CCD. Due to tiny imperfections in manufacturing, some pixels may be more or less sensitive than average, or the telescope’s optics might have imperfections that concentrate light in different areas of the image. Flat-field images let astronomers discover and correct for these effects. A typical flat might look like:



Now that we have our image, dark, and flat field, we can begin to process the data. First, we subtract the dark from the image of the object:



And then we divide that image by the (normalized) flat field, giving a nice, clear picture of our target object:
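Here is a minimal Python sketch of this reduction, assuming the raw frame, dark, and flat are already loaded as NumPy arrays of the same shape (the function name and the median normalization are my own choices; real pipelines also match exposure times and combine many calibration frames):

import numpy as np

def calibrate(raw, dark, flat):
    """Dark-subtract a raw frame, then divide out the normalized flat field."""
    dark_subtracted = raw - dark           # remove the thermal signal
    flat_norm = flat / np.median(flat)     # scale the flat so a typical pixel is ~1
    return dark_subtracted / flat_norm     # correct pixel-to-pixel sensitivity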



Now that we’ve done the initial processing to correct for these biases, we can start to do more interesting analyses of the data. One basic thing we can do is use the image to measure how bright our object is. We can sum up the counts that belong to the object to get a total brightness. In this case, the sum of the object counts is 550. But this number doesn’t mean very much on its own. The object might actually be quite dim but appear bright because a long exposure was taken and the CCD had more time to collect light. Or we could have taken a very short exposure of a very bright object. So we need to find a reference star of known brightness in our image and measure that too. If we know how bright the object appears compared to the reference star in our images, and we know how bright the reference star really is, we can infer the brightness of the object.
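In code, that comparison is just a ratio. Here is a minimal Python sketch (the reference star’s numbers below are hypothetical, and the result comes out in whatever units the reference brightness is given in):

def inferred_brightness(object_counts, reference_counts, reference_brightness):
    """Scale the reference star's known brightness by the ratio of counts."""
    return reference_brightness * (object_counts / reference_counts)

# Hypothetical example: our object summed to 550 counts, while a reference star
# of known brightness 1.0 (arbitrary units) summed to 5500 counts in the same image.
print(inferred_brightness(550, 5500, 1.0))  # -> 0.1, i.e. a tenth as bright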

If we have taken pictures of the same object in different filters, we can also create false-color images. Filters placed in the telescope’s light path restrict which wavelengths reach the CCD and get counted, letting astronomers choose which colors of light to record. To make a false-color image, astronomers combine images from two or more different filters. Each separate image is assigned a display color according to the filter it was taken in (perhaps blue for ultraviolet light, green for visible light, and red for infrared light), then the images are combined into one.
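Here is one way that combination might look in Python (the three single-filter frames below are random synthetic arrays standing in for real calibrated, aligned images; the filter-to-color assignment follows the example above):

import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-ins for three single-filter frames of the same field.
rng = np.random.default_rng(1)
infrared, visible, ultraviolet = (rng.random((64, 64)) for _ in range(3))

def stretch(img):
    """Rescale a frame to the 0-1 range that imshow expects for color channels."""
    return (img - img.min()) / (img.max() - img.min())

# Assign each filter a display color (red = infrared, green = visible,
# blue = ultraviolet) and stack the frames into one false-color image.
false_color = np.dstack([stretch(infrared), stretch(visible), stretch(ultraviolet)])
plt.imshow(false_color, origin="lower")
plt.show()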



False-color images are useful because the color coding for each filter helps draw attention to important differences between the individual images while still letting astronomers see the structure of the object in many different filters at once. In planetary science, for instance, different colors in an image might reflect differences in the composition of a planet’s surface, revealing regions with strikingly different geological histories. Images can also be combined into “true color” pictures, using filters for different wavelengths of visible light, that closely mimic what astronomical objects would look like to human eyes. CCD technology has brought astronomy down to earth by producing images that reveal what the cosmos would look like, if only we could see it as well as our telescopes do.

Wednesday, June 22, 2016

How Do You Discover A Planet?

Out in the furthest reaches of our Solar System, twenty times further from the Sun than Neptune, a massive unknown planet may be lurking. Ten times the mass of the Earth, this massive planet has gravitationally herded the orbits of an obscure group of icy objects into a strange alignment. Last January, Caltech scientists Mike Brown and Konstantin Batygin announced the likely existence of this new planet, dubbed Planet Nine, in a paper published in the Astronomical Journal. Though the mysterious planet has yet to be seen through a telescope, evidence of its existence has been growing for years. The next major Solar System discovery is upon us. When astronomers finally capture an image of Planet Nine, it will mark the first discovery of a new planet in our Solar System in living memory. But why do we think Planet Nine is really there?

***

Other than Earth, the first five planets known to humankind were discovered simply by looking up. Mercury, Venus, Mars, Jupiter, and Saturn were visible to ancient humans with the unaided eye for thousands of years. Our ancestors, reading the skies night after night, noticed these bright lights moving against the backdrop of fixed stars and named them planets, from the Greek word for wanderer. At the time, the Sun and Moon were also considered planets, for they too wandered among the stars, but as the centuries passed, we came to better understand our place in the universe. In 1543, Copernicus published his heliocentric theory, correctly positioning the planets as our celestial siblings orbiting the Sun just as the Earth does.

The next big discovery in the solar system came in 1781, when Englishman William Herschel observed Uranus, a planet invisible to the unaided eye, with a massive homemade telescope. He meticulously studied the new planet, night after night, until he realized that it too wandered through the starry skies. Uranus had been spotted by previous generations of astronomers, but because of its great distance from the Sun, its wandering motion had never been noticed, so it was simply assumed to be a star. Even Herschel was skeptical: he initially believed the seventh planet was a comet—after all, no new planets had ever been documented in recorded history—but careful study of its orbit allowed astronomers to conclude Uranus was a planet in its own right.

Neptune and Planet Nine are different. Just as you can infer the presence of a breeze on a windy day by observing tossing trees and dancing leaves outside your window, scientists first detected Neptune and Planet Nine not through direct observation, but through their effects on objects that can be seen. In the 1840s, English and French astronomers predicted Neptune’s location after observing anomalous deviations in the orbit of Uranus. Again, the astronomy community was skeptical at first. But, using mathematics and the laws of physics, the planet’s location in the sky was pinpointed, and telescopes appropriately aligned to the predicted spot quickly sighted the eighth planet, just as promised.

Humanity’s exploration of the outer reaches of the Solar System didn’t end with Neptune. In the last century, astronomers began probing a region of space called the Kuiper Belt, a swath of tiny icy objects just beyond the orbit of Neptune. The giant blue planet dominates the Kuiper Belt gravitationally, shaping the orbits of the nearby Kuiper Belt Objects. For instance, the Kuiper Belt’s most famous resident, Pluto, is locked into its orbit by a gravitational relationship with Neptune known as resonance. For every two orbits Pluto makes, Neptune makes three. This synchronization regulates Pluto’s motion along its orbit, just as a parent regulates the motion of a child on a playground swing by pushing in harmony with the swing.
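You can check that 3:2 ratio directly against the two orbital periods. A quick sanity check in Python, using rounded, approximate values:

# Approximate orbital periods in Earth years (rounded).
NEPTUNE_PERIOD = 164.8
PLUTO_PERIOD = 247.9

# Pluto's period divided by Neptune's is almost exactly 3/2: Neptune completes
# three orbits in the time it takes Pluto to complete two.
print(PLUTO_PERIOD / NEPTUNE_PERIOD)   # ~1.504
print(3 / 2)                           # 1.5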

While looking for Kuiper Belt Objects, astronomers made a peculiar discovery—Sedna, a strange, icy body about half the size of Pluto. Sedna orbits the sun in a highly elliptical path that takes it from twice to thirty times as far from the Sun as Neptune. At these distances, Sedna is much too far away to be a member of the Kuiper Belt, floating peacefully billions of miles away from Neptune’s region of influence. Usually, the smaller members of the Solar System start out with circular orbits and, over time, find themselves on extremely elliptical orbits after close encounters with massive planets. Like a slingshot, big planets inject energy into the orbits of small bodies during close encounters and send them rocketing into strange new orbits or even out of the Solar System entirely. But no known object could have explained how Sedna came to have such a strange orbit, since it never came close to any known planets. So, for years, astronomers assumed its existence spoke to a freak event like a close gravitational encounter with a passing star—a one in a billion anomaly.

Yet similar objects kept being discovered. In astronomers’ parlance, these new Sedna-like objects all had high perihelia and long major axes. That is, their closest approaches (perihelia) to the Sun were well beyond the orbit of Neptune, and the long (major) axes of their oval orbits were many times larger than their shorter (minor) axes. In other words, the objects had highly elliptical orbits, and they never got very close to Neptune or the Sun. Most telling of all, the long axes of the orbits of all of these objects pointed roughly in the same direction, an eerie coincidence Mike Brown described as “like having six hands on a clock all moving at different rates, and when you happen to look up, they're all in exactly the same place.” Since the chances of such an alignment occurring by accident are low, at about 0.007 percent, the Caltech scientists suspected something was missing from current models of the solar system.

A figure describing various orbital parameters for the Sedna-like objects. The orbits of 2007 TG422, Sedna, and 2010 GB174 are displayed. Sedna’s orbit is in a slightly darker color and is labelled. The distance from the Sun to the nearest part of Sedna’s orbit is labeled “Perihelion”. The length of the long axis is designated as the “Major Axis” and the length of the short axis is designated as the “Minor Axis”.

Orbital Parameters: The orbits of Sedna and two of its sibling objects are shown above. Sedna’s important orbital parameters are labelled. The long axes of the orbits of the three objects point in roughly the same direction, a major clue to Batygin and Brown that our current models of the Solar System might be missing something.

Unlike the astronomers who deduced the existence of Neptune using meticulous calculations performed by hand, Brown and Batygin needed computing power to make their predictions. Neptune’s discovery involved predicting the location of an unknown planet based on deviations in the orbit of just one object, namely Uranus, but Planet Nine’s presence needed to be inferred from a collection of objects with complicated and chaotic gravitational dynamics. In complex systems like this one, it is much faster and easier to study the system using the brute force of computation. Brown and Batygin developed a simulation that used trial and error to deduce the placement of the hypothetical planet in the solar system. By figuring out which arrangements of planets didn’t produce the observed configuration of the solar system, the researchers could narrow down the regions where the new planet might be located.

One run of the simulation might go like this: a possible orbit for Planet Nine is specified in detail and placed into the known model of the solar system. The simulation begins, and the computer predicts the positions of all the planets, moons, and minor solar system bodies over the course of millions, or even billions, of years. When the simulation ends, the outcome is compared to the observable parameters of the solar system today, including the positions of the planets and the alignment of Sedna and its siblings. Sometimes, the end result is dramatically different from the solar system today: in the wrong configuration, an interaction with Planet Nine could fling one of the other eight planets from the solar system via the slingshot effect. Usually, the differences are more subtle. There might not be any Sedna-like objects, or they are no longer aligned, or a subtle detail of their orbits doesn’t match the real state of the Solar System. If there are any differences between the simulation’s prediction and the observed state of the solar system, the initial guess for Planet Nine’s orbit is ruled out.

Batygin and Brown first hypothesized that Planet Nine would be located on the same side of the Solar System as the anomalous Sedna-like objects. They reasoned those long major axes that puzzled astronomers ought to be pointed towards the location of the unknown planet, as if Planet Nine were shepherding Sedna and company into alignment. However, this initial guess didn’t fit the model perfectly, producing a different alignment than the one observed. Then, in a lucky guess, the researchers started the simulation with Planet Nine on the opposite side of the sun. In this new configuration, with the long axes of the objects’ orbits pointing away from Planet Nine, the output of the simulation perfectly matched what we see in the sky today.

All of the orbits of the Sedna-like objects are pictured, along with a possible orbit for Planet Nine (in a different color). The orbit for Planet Nine is the same as the correct answer (orbit 2) in the previous simulation. Importantly, the long axis of Planet Nine’s orbit is pointed in the opposite direction of the long axes of the Sedna-like objects.

A Possible Orbit for Planet Nine: Batygin and Brown’s model places Planet Nine on the opposite side of the sun from Sedna and its siblings. The long axes of the Sedna-like objects’ orbits point in the opposite direction as the long axis of Planet Nine’s orbit.

Placing Planet Nine on the opposite side of the Solar System from Sedna allows for a stable and accurate configuration of orbits. The model also had an unexpected consequence: it predicted a new class of objects astronomers should expect to see. This new group should contain small bodies with orbits perpendicular to the orbits of Sedna and its siblings. Finding these predicted objects would be a key test for the Planet Nine hypothesis. A scientific model can be useful for explaining observed phenomena, but it derives most of its power from its ability to correctly predict new aspects of a situation. Brown and Batygin scoured the catalogue of known minor members of the outer solar system, and, just as the model suggested, four objects with the predicted orbital properties were found.

*** 
             
Simulations can only take astronomers so far. Though Batygin and Brown have deduced by simulation thus far that Planet Nine is ten times heavier than Earth and completes its orbit in tens of thousands of years, the planet technically hasn’t been discovered, as no astronomer or telescope has spotted it. Direct detection is crucial in hunting for unknown planets in our Solar System. After the discovery of Neptune, astronomers sought to find more planets through similar indirect means. In the early twentieth century, supposed deviations were again found in the orbit of Uranus, leading to the prediction of Planet X. The search for that planet uncovered Pluto. But Pluto turned out to be too small to impact Uranus’s orbit in any way, and the deviations were later discovered to be observational errors. Direct imaging of Planet Nine should be feasible for a planet this close; compared to extrasolar planets orbiting other stars, Planet Nine is in our cosmic backyard. We should be able to spot it. So why haven’t we found it yet?

One possibility is that Planet Nine has already been spotted. Just as Uranus had been mistaken for a star before Herschel discovered its wandering motion in the sky, Planet Nine might be misclassified in some comprehensive sky survey as an obscure star. Planet Nine is far enough away for this to be plausible: even with the most powerful telescopes, it takes about 24 hours for Planet Nine to move far enough to rule out the possibility that a distant point of light is a fixed star. And there are further complications. A large swath of Planet Nine’s orbital path takes it through the plane of the Milky Way. If the planet is located in these unfortunate regions, astronomers will be searching for a faint, wandering dot against the backdrop of some of the most densely star-packed regions of the night sky. The busy background will make it difficult to spot the planet, like looking for a tiny ant crawling across a static-filled television screen.

Despite these challenges, we must continue the search for Planet Nine. While Brown and Batygin hope their prediction is correct, the only way to know for sure is to take to the skies, scanning the predicted orbital path and hoping for a glimpse of the elusive planet. If Planet Nine is really there—and Brown predicts we will know the answer within the year—we will recognize the planet the same way humanity has been recognizing planets for thousands of years. Somewhere up there, Planet Nine may be a faint, faraway dot, but it, just like the other planets in our Solar System, will be a wanderer amongst the silent, distant stars.  

***

Originally written for Caltech's science writing class, En/Wr 84.

Wednesday, June 1, 2016

It's Time to Demote Pluto... Again

Originally published in the California Tech, page 3.

In 2006, the International Astronomical Union (IAU) voted to reclassify Pluto, stripping it of its status as a planet while establishing a new class of celestial bodies: dwarf planets. While it makes sound scientific sense to demote Pluto from its planetary status, I argue that we should go even further by destroying the designation dwarf planet — a category which is both scientifically useless and pedagogically confusing. Instead, we should call Pluto what it really is: a Kuiper Belt Object.

So why, according to the IAU, isn’t Pluto a planet? The IAU has established three criteria for evaluating whether or not an object is a planet or dwarf planet. According to resolution B5¹, a planet is “a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to … [assume] a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood [sic] around its orbit.” The first criterion is fairly straightforward, separating the notion of a planet from the notion of a natural satellite or moon. The second is a statement about the size of an object — a planet must be big enough for gravity to be the dominant force sculpting its shape. And the third is, well, confusing. Unfortunately, the third criterion is critical. It is the qualification Pluto fails, and it establishes the difference between a planet and a dwarf planet. Planets have “cleared their neighborhood” and dwarf planets haven’t.
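The decision tree implied by the resolution is short enough to write out. Here is a small Python sketch of it (the function and argument names are mine, not the IAU’s):

def iau_class(orbits_sun, is_round, cleared_neighborhood):
    """Classify a body by the three criteria of IAU resolution B5."""
    if not orbits_sun:
        return "not covered here (e.g. a moon)"
    if is_round and cleared_neighborhood:
        return "planet"
    if is_round:
        return "dwarf planet"
    return "small Solar System body"

print(iau_class(orbits_sun=True, is_round=True, cleared_neighborhood=False))
# -> 'dwarf planet', which is Pluto's situation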

What the “clearing its neighborhood” criterion comes down to is gravitational influence. Planets, especially large ones like Jupiter, have gravitationally dominated their orbits. Anything that passes too close to Jupiter will either crash into the planet or be ejected from the solar system. In this way, Jupiter clears its neighborhood. The same process works for the other seven planets as well. However, Pluto exists in a belt of similar objects — the Kuiper Belt — where its diminutive size is enough to make it round, but not enough to clear its orbital path.

Because this process is inherently gravitational, the IAU criteria effectively establish two separate size thresholds, both of which must be passed in order to be a planet. Dwarf planets, which pass only the “roundness” criterion, exist in a sort of in-between size category, a poor consolation prize to satiate angry Plutophiles. The dwarf planet distinction fails on two counts: it groups together objects with very little in common (other than roundness) and it fails to group together objects that share important physical properties and histories.

There are five objects in the solar system that qualify as dwarf planets: problematic Pluto, our own Mike Brown’s Eris, Ceres (the largest object in the asteroid belt) and the two obscure additions of Haumea and Makemake. Pluto, Eris, Haumea and Makemake are all residents of the Kuiper Belt, which lies beyond Neptune’s orbit, while Ceres is located much closer to the sun as the largest resident of the asteroid belt, which is between the orbits of Mars and Jupiter. While the Kuiper Belt Objects (KBOs) in this group have much in common with each other, they are vastly different from Ceres, both in composition and history. Ceres is primarily rocky; the KBOs are icy. Ceres, as a member of the asteroid belt, has primarily been influenced by Jupiter, while the KBOs’ histories are heavily shaped by the influence of Neptune.

Lumping Ceres together with these other objects has real consequences: it leads to the impression that Ceres is located in a completely different region of the solar system than it actually is. As an astronomy outreach educator, I have encountered several aspiring amateur astronomers who mistakenly believed Ceres orbited beyond Neptune. The category dwarf planet, then, is misleading, a term that obscures truth.

A categorization that makes more sense is to group Ceres with objects that share its composition and history — the asteroids. Ceres may be an exceptionally large member of the asteroid belt, but this does not warrant the distinction of “dwarf planet.” Similarly, Pluto, Eris and their lesser known cousins should be classified alongside the rest of the Kuiper Belt Objects, with which they have more in common than with outlier Ceres.

Superstar astrophysicist Neil deGrasse Tyson advocates for a similar zone-like division of the solar system², separating asteroids and Kuiper Belt Objects. He even goes as far as splitting the planets into two categories — inner planets and outer planets — a division that again reflects the shared composition and history of the rocky planets and gas giants. He implemented this categorization in his design for the solar system exhibit in the Hayden Planetarium before the discovery of Eris, before Pluto’s demotion was even up for discussion.

But this division makes pedagogical sense. Simply memorizing a list of planets is not instructive and leaves learners with a rigid and inflexible understanding of science, as the backlash against Pluto’s reclassification demonstrates. Teaching the solar system as a collection of different classes of objects opens up a more flexible understanding of science and leads naturally to scientifically relevant questions. Why, for instance, are the inner planets and outer planets separated by a belt of asteroids? Why are inner planets rocky and outer planets gassy? Such inquiries are more reflective of the true nature of science. Science is not simply a list of facts, but a systematic way of asking questions and organizing knowledge. Shouldn’t we, as scientists, strive for terminology that accurately reflects the exciting, ever-changing processes by which we discover that knowledge in the first place?

***

I originally wrote this piece for En/Wr 84, Caltech's science writing course. I plan on posting more material from that class soon.

¹ You can read about the IAU definition here.
² See Tyson's The Pluto Files

Sunday, January 24, 2016

Blinky-Blinky: How to Discover Kuiper Belt Objects

The Remote Observing Facility doesn't look special from the outside. It's one of many featureless white doors on the maze-like first floor of Caltech's Cahill Center for Astronomy and Astrophysics. Tonight, though, it's the only door propped open, and from down the hallway, you can hear the voices of its occupants, busy setting up for the next twelve hours of work. When work begins for the night, it's 8 pm Pacific Time--that's 6 pm in Hawai'i, where the Keck Telescope is located.

Inside the windowless room, there are three digital clocks. The first gives the current Greenwich Mean Time, a number that is dutifully recorded at the beginning of each exposure of the telescope. Another gives the time in Hawai'i, keeping track of sunset, sunrise, and twilight on the distant island. Finally, the clock in the middle keeps track of the local time in Pasadena, the only real link to the rhythms of daily life outside of the strange limbo-like atmosphere of the office.

The first step is to turn on and calibrate the instruments and run through a series of checklists for the telescope. Under the row of clocks, a webcam whirs to life, and three panels on the monitor below it blink on. The first is dark--later, when observing starts, it connects us to the telescope operator. She sits at the summit of Mauna Kea, moves the telescope into position, and sets up guidance and tracking so that the telescope stays pointed in a fixed direction as the Earth rotates beneath it. The second shows the telescope technician, who is stationed at the base of the mountain and acts as IT support for the night. The last one is an image of me and the other occupants of the ROF.

Observing officially begins at the end of astronomical twilight. The targets are potential Kuiper Belt Objects¹, identified by another telescope on a previous night and picked out by a computer as likely candidates. The idea behind these detections is what researcher Mike Brown calls "blinky-blinky": observe a patch of sky at two different times and see if anything has moved. Look a third time, just to make sure the apparent movement wasn't caused by random noise in the instrument. If the object is seen in three different places, along a straight line, there's a good chance what you've found is real.

This is the same method Clyde Tombaugh used to discover Pluto, only he did it by literally blinking between two images. Today, Pluto-killer Mike Brown has autonomous programs to do it for him. For any given candidate object, the computer even spits out potential distances and orbits. Intuitively, this makes sense: objects that are farther away appear to move more slowly across the sky per unit time.
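A toy version of that consistency check might look like the Python sketch below (the tolerance and the example detections are made up for illustration; the real pipeline is far more sophisticated):

import numpy as np

def consistent_motion(detections, tol=0.1):
    """Return True if three (time, x, y) detections lie along a line traced at
    roughly constant speed, i.e. they look like one real moving object."""
    (t1, *p1), (t2, *p2), (t3, *p3) = detections
    v12 = (np.array(p2) - np.array(p1)) / (t2 - t1)   # apparent sky velocity, leg 1
    v23 = (np.array(p3) - np.array(p2)) / (t3 - t2)   # apparent sky velocity, leg 2
    return np.linalg.norm(v12 - v23) < tol * np.linalg.norm(v12)

# Hypothetical detections: (hours, x-pixel, y-pixel) from three visits.
print(consistent_motion([(0.0, 100.0, 200.0),
                         (1.0, 103.0, 198.0),
                         (2.5, 107.5, 195.0)]))  # -> True

The same velocity estimate is what lets the software guess at a distance: slower apparent motion suggests a more distant object.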

Looking at follow-up targets several months later allows for a confirmation of the candidates' existence, as well as a narrowing down of their orbital properties. Observations fall into a particular routine. Every two minutes, we move on to a new target. We press a button to expose the telescope--it's essentially the same process as taking a long-exposure image with a digital camera--and note the time. Two minutes later, we reposition the telescope, get an "okay" from the summit, and expose again. Every so often, the telescope gets re-aligned with a guide star of known position, and observations resume. Research continues like this for the next eight hours. Once we get two exposures of the same object, we can do our own makeshift "blinky-blinky" to get a first look at the data as it arrives.

Luckily for me, finals week means I leave the ROF early after only half a night of observing. Mike Brown² and the rest of his team stay up until dawn, searching the clear Hawai'ian skies for distant worlds.

¹ Kuiper Belt Objects are icy bodies that orbit near Neptune and beyond. When you think about Kuiper Belt Objects, think about Pluto-like objects.
² Mike Brown also does some neat research on Europa.

Tuesday, November 10, 2015

Ancient Lakes and Lava Flows

Geology can tell some pretty amazing stories. 


Last month, I visited the Mojave Desert, near Barstow, on a field trip for my geology class this term. We mapped a section of Rainbow Basin, pictured above. This area, which is now in the high desert, used to be the site of a lake! How do I know this? 

The first clue is the rocks. Although the outcrop shows they have experienced a massive folding event, these rocks were originally deposited in horizontal layers. They are sedimentary rocks--made up of smaller rock particles that were compacted together over time. Looking at the individual grains with a hand lens reveals that they are very tiny clay particles. Because clay is so light, even the slightest of river currents can pick it up and transport it. The fact that this clay was deposited and later became rock indicates that it formed in a very low-energy environment. That suggests the rocks formed at the bottom of a large body of water.



Looking at the surfaces of exposed layers shows something telling--ripple marks! They can be found throughout the formation. You can imagine how these ripple marks might have formed at the edge of the water where waves lapped against the shore. Closer examination can even reveal the direction of the currents. Symmetrical ripples suggest wave action. Asymmetrical ripples suggest unidirectional flow. I found both types.



Here and there in the rock record, lake sediments are interspersed with a volcanic material called tuff. Tuff is volcanic ash that has been fused together into a rock. Because the ash is so light, wind can carry it very far, so there is no reason to suspect there were any volcanoes in the vicinity of the lake. But the tuff serves as a "marker bed" that differentiates between different layers of rock, and it can provide a time estimate for the age of the rocks above and below it.



In one of these marker beds, I found inverted mud cracks. They look like casts of ordinary mud cracks, the kind you might find in a dried-out pond today. I can imagine clearly what must have happened. The lake must have gone through a dry spell, and as water evaporated, cracks formed in the newly exposed sediment. Then a thick layer of volcanic ash was deposited, filling in the cracks and forming a new layer in the formation. The inverted cracks still give a sense of orientation to the landscape--they tell us where the surface used to be--even though the original mud cracks are long gone.



Next, we visited Pisgah Crater in the Lavic Lake volcanic field. Looking around at the landscape, it is easy to see this area is a very different environment from Rainbow Basin. 


For one thing, the rocks are very dark! This is because they are basaltic, similar in composition to fresh lava seen in Hawai'i today. The lava has a ropy texture, called pahoehoe, a word that comes from Hawaiian. It forms when an insulating crust cools on top of the flow but continues to be pushed along by the motion below. The insulation lets the lava underneath stay hot and keep flowing for much longer than it would have otherwise. It also creates lava tubes that are fantastic to explore.



Standing at the peak, you can easily see where lava flows have covered the desert valley below. The dark basaltic rocks provide a stark contrast to the bright desert rocks.


Pisgah is a cinder cone volcano. It's relatively young, so it has no connection to the tuff layers in Rainbow Basin. Although we don't normally think of Southern California as a major site of volcanoes, the famous San Andreas fault undergoes extensional motion in this region. The motion creates a rift zone where volcanoes formed by upwelling magma can dot the surface.

Friday, October 9, 2015

Exploring the Cuboctahedron

I built a math toy!



This particular object is in the shape of a cuboctahedron--an Archimedean solid that is particularly fun to play with. With twenty-four bits of straw and a long piece of string, you can build one of your own and morph it into various shapes yourself. The task of building a cuboctahedron also involves learning a bit of geometry and graph theory along the way.

The cuboctahedron is an Archimedean solid, and knowing what these solids are can actually help you build one. If you look at the corners (called vertices--in orange and yellow below) of a cuboctahedron, you'll notice two square and two triangular faces meet at each vertex. Additionally, the triangles and squares alternate, so that two square (or triangular) faces are always opposite each other across the vertex. No two of the same shapes share a side. This pattern is the same at every vertex, as is true for any Archimedean solid. If you build one vertex with this pattern, then continue it with each new vertex you add, you will eventually complete the cuboctahedron correctly. I enjoy making cuboctahedra and other geometric objects in this algorithmic way: instead of comparing the model in my hand to a reference on paper, I can use logic to reason out my next steps and avoid a lot of the confusion that comes from comparing a three-dimensional object with its two-dimensional representation.



The cuboctahedron I made uses straws for its edges. The cuboctahedron has twenty-four edges, so if you want to build it, you'll need twenty-four identical straw pieces and a string at least as long as all of the straw pieces combined. The string only needs to be in one continuous piece--it's possible to wrap the string through each straw, passing through each straw segment once and only once, with the end of the string finishing at the same place it started. This way, you can knot the two ends of the string together and have a flexible cuboctahedron with minimal knotting.

The type of string path we want around the edges of the cuboctahedron, a path that starts and ends in the same place after passing through each edge only once, is called an Eulerian circuit. Such a circuit exists on the edges of a solid only if an even number of edges meet at each vertex. It's easy to see why: every time the circuit visits a vertex, it must enter along one edge and leave along another, and since each edge can only be traversed once, the edges at that vertex pair up into "entering" and "exiting" edges. If you don't believe me, you can try building solids that don't satisfy this property with only one single piece of string. Mathematics guarantees you will inevitably fail, though you can still build these solids if you're willing to use more than one piece of string.
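If you'd rather let a computer find a lacing than trace one by hand, here is a short Python sketch (my own, and not necessarily the path I used): it builds the cuboctahedron's edge graph from the coordinates of its vertices and runs Hierholzer's algorithm to produce an Eulerian circuit.

from itertools import permutations

def cuboctahedron_graph():
    """Adjacency sets for the cuboctahedron: vertices are the permutations of
    (+-1, +-1, 0); edges join vertices whose squared distance is 2."""
    verts = sorted({p for a in (1, -1) for b in (1, -1)
                      for p in permutations((a, b, 0))})
    adj = {v: set() for v in verts}
    for i, u in enumerate(verts):
        for w in verts[i + 1:]:
            if sum((x - y) ** 2 for x, y in zip(u, w)) == 2:
                adj[u].add(w)
                adj[w].add(u)
    return adj

def eulerian_circuit(adj):
    """Hierholzer's algorithm: walk until stuck, backtrack, splice in detours."""
    adj = {v: set(ns) for v, ns in adj.items()}            # working copy
    assert all(len(ns) % 2 == 0 for ns in adj.values())    # even degree everywhere
    stack, circuit = [next(iter(adj))], []
    while stack:
        v = stack[-1]
        if adj[v]:                   # unused edges remain here: keep walking
            w = adj[v].pop()
            adj[w].remove(v)
            stack.append(w)
        else:                        # dead end: this vertex joins the circuit
            circuit.append(stack.pop())
    return circuit

circuit = eulerian_circuit(cuboctahedron_graph())
print(len(circuit) - 1)              # 24: every straw edge traversed once
print(circuit[0] == circuit[-1])     # True: the string ends where it began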



Here is the Eulerian circuit I used in my cuboctahedron. There are other ways of "lacing up" the toy, too! At each vertex, I wrapped the string around itself a few times so that all four edges would hang together nicely. This also let me space out the ends of the straws, and gave the toy a little bit more flexibility (a bit of engineering and experimentation helps).