
Tuesday, July 5, 2016

From Photons to Photos: Observational Astronomy with CCDs

Summer research season is underway for me! This year, I am working with Mike Brown on an observational astronomy project. I’ll be using some of the data collected on the Keck Telescope last winter to study Jupiter’s Trojan asteroids.

Observational astronomy is probably what most people imagine when they picture an astronomer’s work, although the exact image in mind might be a bit outdated. Today’s astronomers are more likely to be found behind a computer screen than at the eyepiece of a telescope, even in work that isn’t strictly computational or theoretical. That’s because astronomy, like many aspects of modern life, has gone digital.

Astronomers recorded their observations on paper, by hand, until the invention of photography. By the early twentieth century, groundbreaking discoveries, such as Edwin Hubble’s discovery of other galaxies and the expansion of the universe, were being made with the assistance of photographic plates. As photography evolved, so did astronomy. Today, digital cameras capture images with sensors called CCDs, and most telescopes do the same: astronomers affix these sensors at the foci of their telescopes in order to collect high-quality data.

How do CCDs capture astronomical images, assuming you have a telescope and a target for your observations? CCD stands for charge-coupled device, a name that hints at how it works. A CCD is sectioned into pixels, or bins, and when exposed to light, the bins gain a charge. The charge on each bin is proportional to the amount of light it received: the more charge a bin has, the more light it was exposed to, and the brighter the area of sky it observed. When the exposure is finished, the charge on each bin is read out and converted into a number, called a count. This transforms the image into an array of counts that represents how much light was detected in each pixel of the CCD.

Arrays, simple lists of numbers, are very easy for computers to store, transfer, and manipulate, so they are a useful format for astronomical data. This conversion of images into numbers just isn’t possible with sketches and photographic plates, and it opens up new possibilities for handling data. Some astronomers today work on “training” computers to perform automatic analyses of arrays, so computers can quickly accomplish basic tasks like identifying variable stars or Kuiper Belt Objects. Such computer programs are especially useful with the rise of large-scale digital sky surveys that produce enormous quantities of data on a nightly basis.


A small part of a typical array might look like this. While useful to a computer, it’s very difficult for a human brain to figure out what’s going on without some help. To understand the data, we can rearrange the numbers a bit. To make things even clearer, we can map counts to colors. I’ll pick greyscale, so that we can keep in mind that more counts corresponds to more light.
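As a minimal sketch of that greyscale mapping (the counts below are made up, and the linear scaling is just one simple choice):

```python
import numpy as np

# A made-up 4x4 array of counts, as it might be read out of a CCD.
counts = np.array([
    [12, 15, 14, 13],
    [14, 80, 95, 15],
    [13, 90, 110, 14],
    [12, 14, 15, 13],
])

# Map counts linearly onto greyscale values in [0, 1]:
# more counts -> more light -> a brighter pixel.
grey = (counts - counts.min()) / (counts.max() - counts.min())
```

Handing `grey` to any image-display routine with a greyscale colormap turns the array back into a picture a human can read at a glance: here, a bright blob in the middle of a dim background.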



Unfortunately, CCDs, as powerful and useful as they are, do introduce their own biases into the data, so our image doesn’t look very clean right now. This problem is easy to correct, as long as you are prepared to encounter it. The CCD-introduced bias can be fixed by taking two specific types of pictures, known as darks and flats, which act like a control in a scientific experiment.

The first type of control picture, the dark, is necessary due to thermal noise in the CCD chip. Thermal noise is caused by heat radiation from the sensor itself, since CCDs are sensitive to infrared light (heat). CCDs in telescopes are often cooled to low temperatures to reduce this noise, but since the noise is present whenever the instrument is above absolute zero, it cannot be eliminated completely. To combat this problem, astronomers prepare a dark, which is an exposure of the CCD to a completely lightless environment, a bit like taking a picture with a camera that still has the lens cap attached. This way, the CCD is only exposed to the thermal noise originating from the instrument itself. Here is what a dark might look like:



The second type of image, the flat, is an image taken of a flat field of uniform light. This could be an evenly illuminated surface. Many astronomers will take flats during sunset when the evening sky is bright enough to wash out the background stars, but not so bright that the sensors are overloaded. Since we know the image should be evenly lit, the flat field allows astronomers to pick up systematic defects in the CCD. Due to tiny imperfections during manufacturing, some pixels may be more or less sensitive than average, or the telescope itself might have lens imperfections that concentrate light in different areas of the image. Flat field images let astronomers discover and correct for these effects. A typical flat might look like:



Now that we have our image, dark, and flat field, we can begin to process the data. First, we subtract the dark from the image of the object:



Then we divide that image by the flat field, normalized so its average is one, giving a nice, clear picture of our target object:
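Putting the two corrections together, a minimal sketch of this reduction might look like the following (the toy frames are made up, and a real pipeline would also handle details like exposure-time matching and bad pixels):

```python
import numpy as np

# Toy 3x3 frames standing in for real CCD exposures (made-up numbers).
raw = np.array([[110., 120., 115.],
                [118., 400., 122.],
                [112., 119., 117.]])      # science exposure of the target
dark = np.full((3, 3), 100.)              # thermal counts with no light at all
flat = np.array([[0.9, 1.0, 1.1],
                 [1.0, 1.0, 1.0],
                 [1.1, 1.0, 0.9]]) * 200. # exposure of a uniform light source

# Subtract the dark to remove thermal noise, then divide by the
# normalized flat to even out pixel-to-pixel sensitivity differences.
flat_norm = flat / flat.mean()
calibrated = (raw - dark) / flat_norm
```

After these two steps, the bright central pixel stands out cleanly above a flat background, which is exactly what we want before doing any science with the image.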



Now that we’ve done initial processing of the image to correct for bias, we can start to do more interesting analyses of the data. One very basic thing we can do is use this image to figure out how bright the object we are looking at is. We can sum up the counts that belong to the object to get a total brightness. In this case, the sum of the object counts is 550. But this number doesn’t mean very much on its own. The object might actually be quite dim but appear bright because a long exposure was taken, giving the CCD more time to collect light. Or, we could have taken a very short exposure of a very bright object. So, we need to find a reference star of known brightness in our image, and measure that too. If we know how bright the object appears compared to the reference star in our images, and we know how bright the reference star is, we can infer the brightness of the object.
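A sketch of that comparison, using the 550 counts above and a made-up reference star, on the standard astronomical magnitude scale (2.5 magnitudes per factor of ten in brightness, with larger magnitudes meaning dimmer objects):

```python
import math

object_counts = 550.0       # summed counts for our target, from the image above
reference_counts = 5500.0   # made-up counts for a reference star in the same frame
reference_magnitude = 10.0  # assumed known brightness of the reference star

# The exposure time affects both stars equally, so it cancels out of the
# ratio. That is why a reference star in the same image calibrates the target.
ratio = object_counts / reference_counts

# Standard magnitude scale: 2.5 magnitudes per factor of ten in brightness.
object_magnitude = reference_magnitude - 2.5 * math.log10(ratio)

print(object_magnitude)  # -> 12.5: ten times dimmer, so 2.5 magnitudes fainter
```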

If we have taken pictures of the same object in different filters, we can also create false-color images. Filters placed in the telescope aperture restrict which wavelengths can pass through the telescope, allowing astronomers to choose which colors of light will reach the CCD and be counted. To make a false color image, astronomers combine images from two or more different filters. Each separate image is assigned a color according to the filter it was taken in (perhaps blue for ultraviolet light, green for visible light, and red for infrared light), then the images are combined into one.
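As a sketch of that combination step (made-up pixel values, with the filter-to-color assignment from the example above), stacking three calibrated exposures into one false-color array might look like:

```python
import numpy as np

# Three made-up 2x2 exposures of the same patch of sky in different
# filters, already calibrated and scaled to the range [0, 1].
infrared = np.array([[0.9, 0.1], [0.1, 0.1]])
visible = np.array([[0.1, 0.8], [0.1, 0.1]])
ultraviolet = np.array([[0.1, 0.1], [0.7, 0.1]])

# Assign each filter a display color (red, green, blue) and stack the
# images into a single height x width x 3 false-color array.
false_color = np.dstack([infrared, visible, ultraviolet])

# Each pixel now carries information from all three filters at once:
# the top-left pixel, for instance, is bright only in the infrared channel,
# so it would appear red in the combined image.
```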



False color images are useful because the color coding for each filter helps draw attention to important differences between the individual images while still allowing astronomers to see the structure of the object in many different filters at once. In planetary science, for instance, different colors in an image might reflect differences in the composition of the surface of a planet, revealing regions of strikingly different geological histories across the whole planet. Images can also be combined to produce “true color” images, using filters for different wavelengths of visible light to produce pictures that closely mimic what different astronomical objects would look like to human eyes. CCD technology has brought astronomy down to earth, quite literally, by producing images that reveal what the cosmos would look like, if only we could see it as well as our telescopes.

Wednesday, June 22, 2016

How Do You Discover A Planet?

Out in the furthest reaches of our Solar System, twenty times further from the Sun than Neptune, a massive unknown planet may be lurking. With ten times the mass of the Earth, its gravitational pull has herded the orbits of an obscure group of icy objects into a strange alignment. Last January, Caltech scientists Mike Brown and Konstantin Batygin announced the likely existence of this new planet, dubbed Planet Nine, in a paper published in the Astronomical Journal. Though the mysterious planet has yet to be seen through a telescope, evidence of its existence has been growing for years. The next major Solar System discovery is upon us. When astronomers finally capture an image of Planet Nine, it will mark the first discovery of a new planet in our Solar System in living memory. But why do we think Planet Nine is really there?

***

Other than Earth, the first planets known to humankind were discovered simply by looking up. The closest five, Mercury, Venus, Mars, Jupiter, and Saturn, have been visible to humans with the unaided eye for thousands of years. Our ancestors, reading the skies night after night, noticed these bright lights moving against the backdrop of fixed stars and named them planets, from the Greek word for wanderer. At the time, the Sun and Moon were also considered planets, for they too wandered among the stars, but as the centuries passed, we came to better understand our place in the universe. In 1543, Copernicus published his heliocentric theory, correctly positioning the planets as our celestial siblings orbiting the Sun just as the Earth does.

The next big discovery in the solar system came in 1781, when Englishman William Herschel observed Uranus, a planet invisible to the unaided eye, with a massive homemade telescope. He meticulously studied the new planet, night after night, until he realized that it too wandered through the starry skies. Though Uranus had been spotted by previous generations of astronomers, because of its great distance from the sun, the planet’s wandering motion had never been observed, so Uranus was simply assumed to be a star. Even Herschel was skeptical: he initially believed the seventh planet was a comet—after all, no new planets had ever been documented in recorded history—but careful study of its orbit allowed astronomers to conclude Uranus was a planet in its own right.

Neptune and Planet Nine are different. Just as you can infer the presence of a breeze on a windy day by observing tossing trees and dancing leaves outside your window, scientists first detected Neptune and Planet Nine not through direct observation, but through their effects on objects that can be seen. In the 1840s, English and French astronomers predicted Neptune’s location after observing anomalous deviations in the orbit of Uranus. Again, the astronomy community was skeptical at first. But, using mathematics and the laws of physics, the planet’s location in the sky was pinpointed, and telescopes appropriately aligned to the predicted spot quickly sighted the eighth planet, just as promised.

Humanity’s exploration of the outer reaches of the Solar System didn’t end with Neptune. In the last century, astronomers began probing a region of space called the Kuiper Belt, a swath of tiny icy objects just beyond the orbit of Neptune. The giant blue planet dominates the Kuiper Belt gravitationally, shaping the orbits of the nearby Kuiper Belt Objects. For instance, the Kuiper Belt’s most famous resident, Pluto, is locked into its orbit by a gravitational relationship with Neptune known as resonance. For every two orbits Pluto makes, Neptune makes three. This synchronization regulates Pluto’s motion along its orbit, just as a parent regulates the motion of a child on a playground swing by pushing in harmony with the swing.

While looking for Kuiper Belt Objects, astronomers made a peculiar discovery—Sedna, a strange, icy body about half the size of Pluto. Sedna orbits the sun in a highly elliptical path that takes it from twice to thirty times as far from the Sun as Neptune. At these distances, Sedna is much too far away to be a member of the Kuiper Belt, floating peacefully billions of miles away from Neptune’s region of influence. Usually, the smaller members of the Solar System start out with circular orbits and, over time, find themselves on extremely elliptical orbits after close encounters with massive planets. Like a slingshot, big planets inject energy into the orbits of small bodies during close encounters and send them rocketing into strange new orbits or even out of the Solar System entirely. But no known object could have explained how Sedna came to have such a strange orbit, since it never came close to any known planets. So, for years, astronomers assumed its existence spoke to a freak event like a close gravitational encounter with a passing star—a one in a billion anomaly.

Yet similar objects kept being discovered. In astronomers’ parlance, these new Sedna-like objects all had distant perihelia and long major axes. That is, their closest approaches (perihelia) to the Sun were well beyond the orbit of Neptune, and the long (major) axes of their oval orbits were many times larger than their shorter (minor) axes. In other words, the objects had highly elliptical orbits, and they never got very close to Neptune or the Sun. Most telling of all, the long axes of the orbits of all of these objects pointed roughly in the same direction, an eerie coincidence Mike Brown described as “like having six hands on a clock all moving at different rates, and when you happen to look up, they're all in exactly the same place.” Since the chances of such an alignment occurring by accident are low, at about 0.007 percent, the Caltech scientists suspected something was missing from current models of the solar system.

A figure describing various orbital parameters for the Sedna-like objects. The orbits of 2007 TG422, Sedna, and 2010 GB174 are displayed. Sedna’s orbit is in a slightly darker color and is labelled. The distance from the sun to the nearest part of Sedna’s orbit is labeled “Perihelion”. The length of the long axis is designated as the “Major Axis” and the length of the short axis is designated as the “Minor Axis”.

Orbital Parameters: The orbits of Sedna and two of its sibling objects are shown above. Sedna’s important orbital parameters are labelled. The long axes of the orbits of the three objects point in roughly the same direction, a major clue to Batygin and Brown that our current models of the Solar System might be missing something.

Unlike the astronomers who deduced the existence of Neptune using meticulous calculations performed by hand, Brown and Batygin needed to use computing power to make their predictions. Neptune’s discovery involved predicting the location of an unknown planet based on deviations in the orbit of just one object, namely Uranus, but Planet Nine’s presence needed to be inferred from a collection of objects with a complicated and chaotic gravitational dynamic. In complex systems like this one, it is much faster and easier to study the system using the brute force of computation. Brown and Batygin developed a simulation that used trial and error in order to deduce the placement of the hypothetical planet in the solar system. By figuring out which arrangements of planets didn’t produce the observed configuration of the solar system, the researchers could narrow down in which regions of the solar system the new planet might be located.

One run of the simulation might go like this: a possible orbit for Planet Nine is specified in detail, and placed into the known model of the solar system. The simulation begins, and the computer predicts the positions of all the planets, moons, and minor solar system bodies over the course of millions, or even billions, of years. When the simulation ends, the outcome is compared to the observable parameters of the solar system today, including the positions of the planets and the alignment of Sedna and its siblings. Sometimes, the end result is dramatically different from the solar system today. In the wrong alignment, an interaction with Planet Nine could fling one of the other eight planets from the solar system via the slingshot effect. Usually, the differences are more subtle. There might not be any Sedna-like objects, or they are not aligned anymore, or a subtle detail of their orbits doesn’t match the real state of the Solar System. If there are any differences between the simulation’s prediction and the observed state of the solar system, the initial guess for Planet Nine’s orbit is ruled out.
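In very schematic form, the trial-and-error search looks like a rejection loop. This is not Batygin and Brown's actual code: the `evolve_solar_system` function below is a stand-in for a real multi-million-year N-body integration, faked here so the control flow is visible.

```python
import random

def evolve_solar_system(planet_nine_orbit):
    """Stand-in for a multi-million-year N-body integration.

    A real run would integrate the planets and the Sedna-like objects
    forward in time; here we fake the outcome, pretending that only
    candidate orbits on the far side of the sun ("anti-aligned")
    reproduce the observed clustering.
    """
    return {"sedna_like_objects_aligned": planet_nine_orbit["anti_aligned"]}

def matches_observations(outcome):
    # Compare the simulated end state to what we actually see today.
    return outcome["sedna_like_objects_aligned"]

# Try many candidate orbits; keep only those that survive the comparison.
candidates = [{"anti_aligned": random.choice([True, False])}
              for _ in range(100)]
surviving = [orbit for orbit in candidates
             if matches_observations(evolve_solar_system(orbit))]
```

Every guess that fails the comparison is ruled out, and whatever survives narrows down where the real Planet Nine could be.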

Batygin and Brown first hypothesized that Planet Nine would be located on the same side of the Solar System as the anomalous Sedna-like objects. They reasoned those long major axes that puzzled astronomers ought to be pointed towards the location of the unknown planet, as if Planet Nine were shepherding Sedna and company into alignment. However, this initial guess didn’t fit the model perfectly, producing a different alignment than the one observed. Then, in a lucky guess, the researchers started the simulation with Planet Nine on the opposite side of the sun. In this new configuration, with the long axes of the objects’ orbits pointing away from Planet Nine, the output of the simulation perfectly matched what we see in the sky today.

All of the orbits of the Sedna-like objects are pictured, along with a possible orbit for Planet Nine (in a different color). The orbit for Planet Nine is the same as the correct answer (orbit 2) in the previous simulation. Importantly, the long axis of Planet Nine’s orbit is pointed in the opposite direction of the long axes of the Sedna-like objects.

A Possible Orbit for Planet Nine: Batygin and Brown’s model places Planet Nine on the opposite side of the sun from Sedna and its siblings. The long axes of the Sedna-like objects’ orbits point in the opposite direction as the long axis of Planet Nine’s orbit.

Placing Planet Nine on the opposite side of the Solar System from Sedna allows for a stable and accurate configuration of orbits. The model also had an unexpected consequence: it predicted a new class of objects astronomers should expect to see. This new group should contain small bodies with orbits perpendicular to the orbits of Sedna and its siblings. Finding these predicted objects would be a key test for the Planet Nine hypothesis. A scientific model can be useful for explaining observed phenomena, but it derives most of its power from its ability to correctly predict new aspects of a situation. Brown and Batygin scoured the catalogue of known minor members of the outer solar system, and, just as the model suggested, four objects with the predicted orbital properties were found.

*** 
Simulations can only take astronomers so far. Though Batygin and Brown have deduced from their simulations that Planet Nine is ten times heavier than Earth and completes its orbit in tens of thousands of years, the planet technically hasn’t been discovered, as no astronomer or telescope has spotted it. Direct detection is crucial in hunting for unknown planets in our Solar System. After the discovery of Neptune, astronomers sought to find more planets through similar indirect means. In the early twentieth century, supposed deviations were again found in the orbit of Uranus, leading to the prediction of Planet X. The search for that planet uncovered Pluto. But Pluto turned out to be far too small to affect Uranus’s orbit, and the deviations were later discovered to be observational errors. Direct imaging of Planet Nine should be feasible for a planet this close; compared to extrasolar planets orbiting other stars, Planet Nine is in our cosmic backyard. We should be able to spot it. So why haven’t we found it yet?

One possibility is that Planet Nine has already been spotted. Just as Uranus had been mistaken for a star before Herschel discovered its wandering motion in the sky, Planet Nine might be misclassified in some comprehensive sky survey as an obscure star. Planet Nine would be far enough away for this to be a possibility. Even with the most powerful telescopes, it takes about 24 hours for Planet Nine to move far enough to rule out the possibility that the distant point of light is a fixed star. And there are further complications. A large swath of Planet Nine’s orbital path takes it through the plane of the Milky Way. If the planet is located in these unfortunate regions, astronomers will be searching for a faint, wandering dot against the backdrop of some of the most densely star-packed regions of the night sky. The busy background will make it difficult to spot the planet, like looking for a tiny ant crawling across a static-filled television screen.

Despite these challenges, we must continue the search for Planet Nine. While Brown and Batygin hope their prediction is correct, the only way to know for sure is to take to the skies, scanning the predicted orbital path and hoping for a glimpse of the elusive planet. If Planet Nine is really there—and Brown predicts we will know the answer within the year—we will recognize the planet the same way humanity has been recognizing planets for thousands of years. Somewhere up there, Planet Nine may be a faint, faraway dot, but it, just like the other planets in our Solar System, will be a wanderer amongst the silent, distant stars.  

***

Originally written for Caltech's science writing class, En/Wr 84.

Wednesday, June 1, 2016

It's Time to Demote Pluto... Again

Originally published in the California Tech, page 3.

In 2006, the International Astronomical Union (IAU) voted to reclassify Pluto, stripping it of its status as a planet while establishing a new class of celestial bodies: dwarf planets. While it makes sound scientific sense to demote Pluto from its planetary status, I argue that we should go even further by destroying the designation dwarf planet — a category which is both scientifically useless and pedagogically confusing. Instead, we should call Pluto what it really is: a Kuiper Belt Object.

So why, according to the IAU, isn’t Pluto a planet? The IAU has established three criteria for evaluating whether or not an object is a planet or dwarf planet. According to resolution B51, a planet is “a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to … [assume] a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood [sic] around its orbit.” The first criterion is fairly straightforward, separating the notion of planet from the notion of a natural satellite or moon. The second is a statement about the size of an object — a planet must be big enough for gravity to be the dominant force sculpting its shape. And the third is, well, confusing. Unfortunately, the third criterion is critical. It is the qualification Pluto fails and it establishes the difference between a planet and a dwarf planet. Planets have “cleared their neighborhood” and dwarf planets haven’t.

What the “clearing its neighborhood” criterion comes down to is gravitational influence. Planets, especially large ones like Jupiter, have gravitationally dominated their orbits. Anything that passes too close to Jupiter will either crash into the planet or be ejected from the solar system. In this way, Jupiter clears its neighborhood. The same process works for the other seven planets as well. However, Pluto exists in a belt of similar objects — the Kuiper Belt — where its diminutive size is enough to make it round, but not enough to clear its orbital path.

Because this process is inherently gravitational, the IAU criteria effectively establish two separate size thresholds, both of which must be passed in order to be a planet. Dwarf planets, passing only the “roundness” criterion, exist in a sort of in-between size category, a poor consolation prize to satiate angry Plutophiles. The dwarf planet distinction fails on two counts: it groups together objects with very little in common (other than roundness) and it fails to group together objects that share important physical properties and histories.

There are five objects in the solar system that qualify as dwarf planets: problematic Pluto, our own Mike Brown’s Eris, Ceres (the largest object in the asteroid belt) and the two obscure additions of Haumea and Makemake. Pluto, Eris, Haumea and Makemake are all residents of the Kuiper Belt, which lies beyond Neptune’s orbit, while Ceres is located much closer to the sun as the largest resident of the asteroid belt, which is between the orbits of Mars and Jupiter. While the Kuiper Belt Objects (KBOs) in this group have much in common with each other, they are vastly different from Ceres, both in composition and history. Ceres is primarily rocky; the KBOs are icy. Ceres, as a member of the asteroid belt, has primarily been influenced by Jupiter, while the KBOs’ histories are heavily shaped by the influence of Neptune.

Lumping Ceres together with these other objects has real consequences: it leads to the impression that Ceres is located in a completely different region of the solar system. As an astronomy outreach educator, I have encountered several aspiring amateur astronomers who mistakenly believed Ceres orbited beyond Neptune. The category dwarf planet, then, is misleading, a term that obscures truth.

A categorization that makes more sense is to group Ceres with objects that share its composition and history — the asteroids. Ceres may be an exceptionally large member of the asteroid belt, but this does not warrant the distinction of “dwarf planet.” Similarly, Pluto, Eris and their lesser known cousins should be classified alongside the rest of the Kuiper Belt Objects, with which they have more in common than with outlier Ceres.

Superstar astrophysicist Neil deGrasse Tyson advocates for a similar zone-like division of the solar system2, separating asteroids and Kuiper Belt Objects. He even goes as far as splitting the planets into two categories — inner planets and outer planets — a division that again reflects the shared composition and history of the rocky planets and gas giants. He implemented this categorization in his design for the solar system exhibit in the Hayden Planetarium before the discovery of Eris, before Pluto’s demotion was even up for discussion.

But this division makes pedagogical sense. Simply memorizing a list of planets is not instructive and leaves learners with a rigid and inflexible understanding of science, as the backlash against Pluto’s reclassification demonstrates. Teaching the solar system as a collection of different classes of objects opens up a more flexible understanding of science and leads naturally to scientifically relevant questions. Why, for instance, are the inner planets and outer planets separated by a belt of asteroids? Why are inner planets rocky and outer planets gassy? Such inquiries are more reflective of the true nature of science. Science is not simply a list of facts, but a systematic way of asking questions and organizing knowledge. Shouldn’t we, as scientists, strive for terminology that accurately reflects the exciting, ever-changing processes by which we discover it in the first place?

***

I originally wrote this piece for En/Wr 84, Caltech's science writing course. I plan on posting more material from that class soon.

1 You can read about the IAU definition here.
2 See Tyson's The Pluto Files

Sunday, January 24, 2016

Blinky-Blinky: How to Discover Kuiper Belt Objects

The Remote Observing Facility doesn't look special from the outside. It's one of many featureless white doors on the maze-like first floor of Caltech's Cahill Center for Astronomy and Astrophysics. Tonight, though, it's the only door propped open, and from down the hallway, you can hear the voices of its occupants, busy setting up for the next twelve hours of work. When work begins for the night, it's 8 pm Pacific Time--that's 6 pm in Hawai'i, where the Keck Telescope is located.

Inside the windowless room, there are three digital clocks. The first gives the current Greenwich Mean Time, a number that is dutifully recorded at the beginning of each exposure of the telescope. Another gives the time in Hawai'i, keeping track of sunset, sunrise, and twilight on the distant island. Finally, the clock in the middle keeps track of the local time in Pasadena, the only real link to the rhythms of daily life outside of the strange limbo-like atmosphere of the office.

The first step is to turn on and calibrate the instruments and run through a series of checklists for the telescope. Under the row of clocks, a webcam whirs to life, and three panels on the monitor below it blink on. The first is dark--later, when observing starts, it connects us to the telescope operator. She sits at the summit of Mauna Kea, moves the telescope into position, and sets up guidance and tracking so that the telescope stays pointed in a fixed direction as the Earth rotates beneath it. The second shows the telescope technician, who is located at the base of the mountain and acts as IT support for the night. The last one is an image of me and the other occupants of the ROF.

Observing officially begins at the end of astronomical twilight. The targets are potential Kuiper Belt Objects1, identified by another telescope on a previous night and picked out by a computer as likely candidates. The idea behind these detections is what researcher Mike Brown calls "blinky-blinky": observe a patch of sky at two different times, and see if anything has moved. Look a third time, just to make sure the apparent movement wasn't caused by random noise in the instrument. If the object is seen in three different places, along a straight line, there's a good chance what you've found is real.

This is the same method Clyde Tombaugh used to discover Pluto, only he did it by literally blinking between two images. Today, Pluto-killer Mike Brown has autonomous programs to do it for him. For any given candidate object, the computer even spits out potential distances and orbits. Intuitively, this makes sense: objects that are farther away appear to move more slowly across the sky per unit time.

Looking at follow-up targets several months later allows for a confirmation of candidates' existence, as well as a narrowing down of orbital properties. Observations fall into a particular routine. Every two minutes, we move onto a new target. We press a button to expose the telescope--it's essentially the same process as taking a long exposure image with a digital camera--and note the time. Two minutes later, we reposition the telescope, get an "okay" from the summit, and expose again. Every so often, the telescope gets re-aligned with a guide star of known position, and observations resume. Research continues like this for the next eight hours. Once we get two exposures of the same object, we can do our own makeshift "blinky-blinky" to get a first look at the data as it arrives.
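A makeshift version of the straight-line check from "blinky-blinky" might look like the following sketch, with made-up sky coordinates:

```python
def looks_real(p1, p2, p3, tolerance=1e-6):
    """Sanity check for three detections of a candidate moving object.

    Each detection is an (x, y) sky position from one exposure, with the
    exposures evenly spaced in time. A real object drifting at a steady
    rate lands on a straight line, with the middle detection halfway
    between the other two; random noise almost never does.
    """
    mid_x = (p1[0] + p3[0]) / 2
    mid_y = (p1[1] + p3[1]) / 2
    return abs(p2[0] - mid_x) < tolerance and abs(p2[1] - mid_y) < tolerance

# A candidate that drifted steadily between exposures...
print(looks_real((10.0, 5.0), (10.2, 5.1), (10.4, 5.2)))  # -> True
# ...and a spurious "detection" that jumped around (likely noise).
print(looks_real((10.0, 5.0), (11.5, 4.0), (10.4, 5.2)))  # -> False
```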

Luckily for me, finals week means I leave the ROF early after only half a night of observing. Mike Brown2 and the rest of his team stay up until dawn, searching the clear Hawai'ian skies for distant worlds.

1 Kuiper Belt Objects are icy bodies that orbit near Neptune and beyond. When you think about Kuiper Belt Objects, think about Pluto-like objects.
2 Mike Brown also does some neat research on Europa.

Wednesday, August 26, 2015

Cross Sectional Astronomy

My research this summer came to an end last week with a seminar where I presented alongside many other students in Caltech's Summer Undergraduate Research Program. In addition to presenting my work with Monte Carlo simulations, I also attended talks given by other students doing research in astronomy and physics.

Many of the astronomy projects I learned about focused on creating software for recognizing and analyzing different astronomical phenomena, from variable stars to pulsars and contact binary systems. Many large-scale sky surveys, such as the Palomar Transient Factory and the Sloan Digital Sky Survey, produce a wealth of data on astronomical objects. Computers are often the best way to analyze the abundance of data produced by these surveys in order to identify interesting targets for follow-up study. But why do astronomers need these huge sky surveys and millions of target objects to study?

Analyzing how any population changes over time, whether it is a population of people, stars, or starfish, is a common problem in many areas of science. It can be a tricky problem too, especially when trying to tease out correlation and causation from subtle differences between subgroups of the population. There are two main study methodologies for dealing with this problem: longitudinal studies and cross sectional studies.

Longitudinal studies are the intuitive approach to learning how a population changes over time: just watch as the population (or more realistically, a random sample of the population) evolves naturally. It makes sense, but it's difficult in a lot of situations. For example, longitudinal studies of humans take dedication and decades of research. For phenomena with long lifespans, such as stars, this type of study is simply impossible--the stars vastly outlast human lives and even human civilizations!

Cross sectional studies instead examine many individuals in the population at the same time. Each individual represents the population at a slightly different stage of evolution, with slightly different characteristics: a random sample provided by nature. In humans, an example of a cross sectional study is gathering pictures of many different individuals at different ages in order to examine how appearance changes with age.

Since astronomers only have access to a snapshot of the universe as it appears today, cross sectional studies are what astronomers use to study populations of stars. The most famous example of a cross sectional study is the Hertzsprung-Russell diagram, a plot that correlates stars' surface temperatures (or colors) with their luminosities. The diagram shows stars in different stages of their evolution, from main sequence stars to red giants and white dwarfs, along with stars in transitional states between these major milestones. With the diagram, we can trace the development of different types of stars and see how this development changes with different intrinsic properties of the star (mass turns out to be the most important property in determining a star's ultimate fate).

There are some problems with the cross sectional approach. For example, age itself may correlate with the evolution of the population in question. In the human example, improving health as time goes on might manifest itself in physical differences, such as an increase in height, between generations that are not caused by the aging process itself. In astronomy, a star that is now nearing the end of its life formed in a quite different universe than a protostar that has just reached the main sequence. We know from theoretical models that the concentration of metals in the universe has increased with time as stars convert hydrogen and helium into heavier elements. Luckily, we can attempt to correct for these effects. Due to the finite speed of light and the vast size of the universe, by looking further and further away, we effectively look back in time. This can help us to determine how conditions were different for older stars when they formed, when compared to stars which are forming today.

Having a large sample size is important in a cross sectional study because it ensures that a representative sample is available and that no important features of the population will be missed. Cross sectional methods and the large samples provided by surveys help astronomers discover how stars age, correlate properties among different populations of stars, and experimentally confirm hypotheses about many types of astronomical objects. There is still much to be learned about a variety of astronomical systems--stars, planets, and more.

Thursday, July 9, 2015

How Do You Build an X-Ray Telescope?

X-ray telescopes are tricky to engineer. With such high energy and short wavelengths, x-rays can pass through nearly any material we throw at them, making it very difficult to make mirrors that can direct and focus light into a useful image. This is because x-rays, like any type of electromagnetic wave, can interact and scatter off of objects similar in size to the wavelength of the wave itself.1 X-rays have such short wavelengths that they do not interact with most atoms and atom-sized objects--not because the wavelength is too big to interact, but because the wavelength is too small! X-rays can pass through the spaces between atoms in many materials.
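To put rough numbers on "too small," a photon's wavelength follows from its energy as lambda = hc/E. Here's a quick back-of-envelope in Python (the constant and example energies are my own illustrative values, not from any instrument spec):

```python
# Photon wavelength from photon energy: lambda = h*c / E.
# Using h*c ~ 1239.84 eV*nm (a standard back-of-envelope constant).
HC_EV_NM = 1239.84

def wavelength_nm(energy_ev):
    """Photon wavelength in nanometers for a photon energy in eV."""
    return HC_EV_NM / energy_ev

print(wavelength_nm(2.0))       # a ~2 eV visible photon: hundreds of nm
print(wavelength_nm(10_000.0))  # a 10 keV x-ray: roughly a tenth of a nm
```

A 10 keV x-ray comes out near 0.12 nm, comparable to the size of a single atom--so most materials look mostly like empty space to it.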

One way around this problem is to use mirrors that intercept light at very, very high angles of incidence so that the x-rays merely graze the surface of the mirror. This decreases the effective spacing between the atoms as seen by an incoming x-ray.2 Many mirrors for x-ray telescopes are designed using rings of thin foils arranged so that the surface of the foil is nearly parallel (but not quite!) to the telescope's viewing direction.
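Here's a back-of-envelope sketch of that projection effect (the atomic spacing is an assumed illustrative value, not a real mirror specification): the spacing between surface atoms, as seen along the incoming ray, shrinks with the sine of the graze angle.

```python
import math

# Toy projection argument: atoms spaced d apart along the mirror surface
# appear foreshortened to d * sin(theta) for a ray arriving at graze
# angle theta, measured from the surface.
def effective_spacing(d_nm, graze_deg):
    return d_nm * math.sin(math.radians(graze_deg))

d = 0.3  # assumed interatomic spacing in a solid, in nm
for angle in (90.0, 10.0, 1.0, 0.5):
    print(f"{angle:5.1f} deg -> {effective_spacing(d, angle):.4f} nm")
```

At half a degree, the effective spacing is far smaller than an x-ray wavelength, so the surface looks smooth and continuous to the incoming ray instead of mostly empty.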

This method doesn't work for very high energy x-rays, called "hard" x-rays. Instead, telescopes use a coded mask: a screen that blocks or admits x-rays in a very specific pattern. This pattern is then projected onto the telescope's detector, like a shadow. By comparing the pattern on the screen with the pattern detected by the instrument, the position of the x-ray source can be quickly determined. Specifically, given the shift between the detected shadow and the mask pattern, the known geometry of the instrument, and some trigonometry, you can determine the celestial coordinates of the source.

Often, coded masks have a very particular grid pattern for blocking light.3 Why is this? It makes the shift between the shadow and the screen easier to identify. Imagine that, in order to figure out the shift, you have two pictures, one of the screen and one of the detected pattern, lying on top of each other. You are allowed to move the picture of the detected pattern relative to the screen, but instead of seeing whether the two pictures match each other, you only get a number telling you the percentage of "matches" (places where dark is on top of dark or light is on top of light). When this percentage reaches 100%, you can be sure that the displacement of the detected pattern is the correct shift for determining the position of the source in the sky.

If, instead of using a coded mask, the screen were simply randomly generated, what percentage readings would we expect from trying to match up the pictures? The percentage would still be 100% when the two images were aligned, but away from the peak, the percentages would vary greatly and unpredictably. In one position, 60% of the image might match, while shifted slightly to the left, only 1% would match. This makes the position of the best match difficult to detect, especially when, instead of knowing the absolute percentage match, you only know how much better a match one position is compared to nearby positions. Relative readings like this are more representative of the problem, since sometimes portions of the shadow will miss the detector entirely.

Instead, coded masks are designed so that the percentage of matches is constant for every position except for the position that represents the best fit. This way, the position where the shadow image best matches the screen pattern is very easy to identify.
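Here's a toy 1-D version of the shadow-matching game in Python (using a random pattern for illustration, not a real coded mask design): we shift a binary pattern by a hidden amount, score every candidate shift by its percentage of matches, and look for the peak.

```python
import numpy as np

rng = np.random.default_rng(1)

# A binary "mask" pattern, and a detected "shadow" that is the same
# pattern shifted by an unknown amount, plus a little detector noise.
mask = rng.integers(0, 2, size=64)
true_shift = 17
shadow = np.roll(mask, true_shift).astype(float)
shadow += rng.normal(0.0, 0.05, size=shadow.size)

# Score each candidate shift by the fraction of positions where the
# shifted mask agrees with the shadow; the best score marks the shift.
scores = [np.mean(np.abs(np.roll(mask, s) - shadow) < 0.5)
          for s in range(mask.size)]
best = int(np.argmax(scores))
print(best)
```

Because this toy pattern is random, the off-peak scores fluctuate unpredictably, exactly the problem described above; real coded masks (such as uniformly redundant arrays) are constructed so that those off-peak scores stay flat.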

I learned about these interesting design considerations from my colleagues at Caltech's Space Radiation Laboratory. Some of the other SURF researchers I have met this summer are working on x-ray telescopes. I was surprised to find similarities between the design of x-ray telescopes and problems I had been tackling as part of my research. The detector I am working on has two layers. By combining information about where incoming particles hit on both layers of the detector, the direction of the incoming particles can be determined. Instead of using a screen to block out light, the first layer of the detector directly locates particles rather than casting a patterned shadow. However, the readings of the second detector can almost be thought of as a shadow of the reading on the first detector, shifted by some amount that depends on the particle trajectories.

1 I discuss how this affects optical telescopes in my post about Palomar.
2 If you're having trouble imagining this, think of an x-ray coming at the mirror edge-on. It would appear as if all the atoms overlapped along the same line of sight. A few degrees from edge-on, there is still a considerable amount of overlap.
3 You can see a few nice examples and a great explanation of coded masks in this video.

Sunday, May 24, 2015

A Trip to Palomar

Today, I visited the Palomar Observatory in the mountains north of San Diego. Palomar has an extensive history of astronomical discovery throughout the twentieth century and continues to be in use today. The observatory is home to a massive 200-inch telescope built and operated by Caltech. The size of the telescope—200 inches—refers to the diameter of the telescope's primary mirror and is a good measure of its light-collecting power. A series of five other mirrors helps focus the light and direct it to various instruments, including a spectrometer whose housing I was allowed to climb inside! The entire assembly is housed in a massive dome with the same diameter as the ancient Roman Pantheon. Our tour guide stressed that the huge dome and extensive support structures are all designed to protect and align a thin layer of aluminum weighing only five grams in total.
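Since light-collecting power scales with mirror area, a quick calculation shows why those 200 inches matter. Here's a back-of-envelope comparison against a dark-adapted human pupil (the 7 mm pupil diameter is my assumed value):

```python
import math

# Collecting power scales with area, i.e. with diameter squared.
hale_d_m = 200 * 0.0254   # 200 inches in meters (about 5.08 m)
pupil_d_m = 0.007         # assumed dark-adapted pupil diameter

def area(d):
    """Area of a circular aperture of diameter d."""
    return math.pi * (d / 2) ** 2

ratio = area(hale_d_m) / area(pupil_d_m)
print(round(ratio))  # roughly half a million times the naked eye
```

That factor of roughly 500,000 is why a telescope like this can photograph objects far too faint for any human eye.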



Unlike the everyday mirrors in bathrooms, which owe their reflectivity to silver surfaces, Palomar uses aluminum to create its mirrors. Silver mirrors coat glass through a simple chemical process, the Tollens reaction. At Palomar, aluminum is deposited onto the glass primary in a precisely controlled vacuum environment in order to ensure the mirror is devoid of imperfections. When making telescopes, minuscule imperfections can be a big problem. Any deviation from a perfectly parabolic surface will scatter or blur the valuable image the telescope aims to collect, and imperfections comparable in size to the wavelength of the observed light (in this case, visible light, with wavelengths of several tenths of a micron) can compromise the instrument. For this reason, much care is taken to hunt down these tiny flaws on a giant mirror. Every two to three years, the aluminum on the mirror is carefully stripped and recoated using the same high-precision process in order to remove the dust and foreign material (read: bird droppings) that accumulate from nightly use.

Below are some panoramas I took from various locations under the dome. The primary mirror is located under the big structure and is currently pointed straight up. The large cylindrical tank is the vacuum chamber where the mirror is repaired.


Here is a view from the south end of the telescope. The hole on the left is where I got to enter the telescope. The cage on top of the telescope is the observing platform; it is mechanically separated from the rest of the telescope so that the observer's vibrations would not disturb the instrument. Today, electronic instruments take data instead of astronomers' eyes.