Halos Gone MAD

Distribution of dark matter when the Universe was about 3 billion years old, obtained from a numerical simulation of galaxy formation. The left panel displays the continuous distribution of dark matter particles, showing the typical wispy structure of the cosmic web, with a network of sheets and filaments. The right panel highlights the dark matter halos, the most efficient cosmic sites for the formation of star-bursting galaxies, with a minimum dark matter halo mass of 300 billion times that of the Sun. Credit: VIRGO Consortium/Alexandre Amblard/ESA


One of the successes of the ΛCDM model of the universe is the ability of models to create structures with scales and distributions similar to those we view in the universe today. Or, at least that’s what astronomers tell us. While computer simulations can recreate numerical universes in a box, interpreting these mathematical approximations is a challenge in and of itself. To identify the components of the simulated space, astronomers have had to develop tools to search for structure. The result has been nearly 30 independent computer programs since 1974. Each promises to reveal the forming structure in the universe by finding regions in which dark matter halos form. To test these algorithms, a conference entitled “Haloes going MAD” was held in Madrid, Spain in May 2010, in which 18 of these codes were put to the test to see how well they stacked up.

Numerical simulations of universes, like the famous Millennium Simulation, begin with nothing more than “particles”. While these are undoubtedly small on a cosmological scale, each particle represents a blob of dark matter with millions or billions of solar masses. As time is run forward, the particles are allowed to interact with one another following rules that coincide with our best understanding of physics and the nature of such matter. This leads to an evolving universe from which astronomers must use complicated codes to locate the conglomerations of dark matter inside which galaxies would form.

One of the main methods such programs use is to search for small overdensities and then grow a spherical shell around each one until the mean enclosed density drops below a chosen threshold. Most will then prune the particles within the volume that are not gravitationally bound, to make sure that the detection mechanism didn’t just seize on a brief, transient clustering that will fall apart in time. Other techniques search in phase space for groups of particles that are both close together and moving with similar velocities (a sign that they have become bound).
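For the programming-minded, here is a minimal sketch of the spherical-overdensity idea in Python. The function name, the threshold of 200 times the critical density, and the units are illustrative choices of mine, not taken from any of the codes in the comparison, and the unbinding step is left out.

```python
import numpy as np

def spherical_overdensity_halo(positions, masses, center,
                               overdensity=200.0, rho_crit=1.0):
    """Grow a sphere around a density peak until the mean enclosed density
    drops below `overdensity` times the critical density `rho_crit`.
    The gravitational unbinding step described above is omitted for brevity."""
    r = np.linalg.norm(positions - center, axis=1)   # distance of every particle from the peak
    order = np.argsort(r)
    radii = r[order]
    enclosed_mass = np.cumsum(masses[order])         # mass inside each successive radius
    volume = (4.0 / 3.0) * np.pi * np.maximum(radii, 1e-12) ** 3
    mean_density = enclosed_mass / volume            # mean density inside each sphere
    inside = mean_density >= overdensity * rho_crit
    if not inside.any():
        return None                                  # no halo above the threshold at this peak
    edge = np.nonzero(inside)[0].max()               # last radius still above the threshold
    return radii[edge], enclosed_mass[edge]          # halo radius and enclosed mass
```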

To compare how each of the algorithms fared, they were put through two tests. The first involved a series of intentionally constructed dark matter halos with embedded sub-halos. Since the particle distribution was placed by hand, the programs could be checked on whether they correctly recovered the centers and sizes of the halos. The second test was a full-fledged universe simulation. In this case the true distribution wouldn’t be known, but the sheer size would allow different programs to be compared on the same data set to see how similarly they interpreted a common source.

In both tests, all the finders generally performed well. In the first test, there were some discrepancies based on how different programs defined the location of the halos: some defined it as the peak in density, while others defined it as the center of mass. When searching for sub-halos, finders that used the phase-space approach seemed able to detect smaller formations more reliably, yet did not always determine which particles in the clump were actually bound. For the full simulation, all algorithms agreed exceptionally well. Due to the nature of the simulation, small scales weren’t well represented, so the understanding of how each code detects these structures was limited.

The combination of these tests did not favor one particular algorithm or method over any other; it revealed that each generally functions well with respect to the others. The agreement of so many independent codes, built on independent methods, means that the findings are extremely robust. The picture they provide of how structure in the universe evolves allows astronomers to make fundamental comparisons to the observable universe in order to test such models and theories.

The results of this test have been compiled into a paper that is slated for publication in an upcoming issue of the Monthly Notices of the Royal Astronomical Society.

Finding the Failed Supernovae

Recipe for a pair instability supernova. It is hypothesised that in extremely massive stars, gamma rays radiating from the core become so energetic that they can undergo pair production after interaction with a nucleus. Essentially, the gamma ray creates a paired particle and antiparticle (commonly an electron and a positron). The loss of radiation pressure as gamma rays convert to particles results in gravitational collapse of the star's core - and kaboom! Credit: chandra.harvard.edu


When high mass stars end their lives, they explode in monumental supernovae. But when the most massive of these monsters die, theory predicts that they may not reveal even so much as a whimper as their massive cores implode. Instead, the implosion occurs so quickly that the rebound, and all photons created during it, are immediately swallowed into the newly formed black hole. Estimates have suggested that as much as 20% of stars that are massive enough to form supernovae collapse directly into a black hole without an explosion. These “failed supernovae” would simply disappear from the sky, leaving such predictions seemingly impossible to verify. But a new paper explores the potential for neutrinos, subatomic particles that rarely interact with normal matter, to escape during the collapse and be detected, heralding the death of a giant.

Presently, only one supernova has been detected by its neutrinos. This was supernova 1987A, a relatively close supernova which occurred in the Large Magellanic Cloud, a satellite galaxy to our own. When this star exploded, the neutrinos escaped from the collapsing core and reached detectors on Earth three hours before the shockwave reached the star’s surface and produced a visible brightening. Yet despite the enormity of the eruption, only 24 neutrinos (or more precisely, electron anti-neutrinos) were detected across three detectors.

The further away an event is, the more its neutrinos are spread out, which in turn decreases the flux at the detector. Current detectors are expected to be large enough to catch supernova events only from within the Milky Way and its satellites, which occur at a rate of roughly 1-3 per century. But as with most of astronomy, the detection radius can be increased with larger detectors. The current generation uses detectors with masses on the order of kilotons of detecting fluid, but proposed detectors would increase this to megatons, pushing the sphere of detectability out to as much as 6.5 million light years, which would include our nearest large neighbor, the Andromeda galaxy. With such enhanced capabilities, detectors would be expected to find neutrino bursts on the order of once per decade.
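To see roughly how detector size translates into reach, note that the flux falls as one over the distance squared, so the distance out to which a burst of a given size can be seen grows as the square root of the detector mass. Here is a small sketch with illustrative numbers; the masses and the current reach are my assumptions, not figures from the study.

```python
import math

# Neutrino flux falls off as 1/d**2 and the number of events caught scales with
# detector mass, so for a fixed burst size the reach grows as sqrt(detector mass).
current_mass_kt = 32.0       # assumed: a Super-Kamiokande-scale detector, in kilotons
proposed_mass_kt = 5000.0    # assumed: a several-megaton next-generation detector
current_reach_kly = 500.0    # assumed reach covering the Milky Way and its satellites

gain = math.sqrt(proposed_mass_kt / current_mass_kt)
print(f"reach grows ~{gain:.0f}x, to roughly "
      f"{current_reach_kly * gain / 1000:.0f} million light years")
```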

Assuming the calculations are correct and that 20% of supernovae implode directly, such gargantuan detectors could detect 1-2 failed supernovae per century. Fortunately, the odds are slightly enhanced by the extra mass of the star, which would make the total energy of the event higher; while this extra energy wouldn’t escape as light, it would correspond to an increased neutrino output. Thus, the detection sphere could potentially be pushed out to 13 million light years, which would incorporate several galaxies with high rates of star formation and, consequently, supernovae.

While this puts the detection of failed supernovae on the radar, a bigger problem remains. Say neutrino detectors record a sudden burst of neutrinos. With a typical supernova, this detection would quickly be followed by the optical detection of the explosion, but with a failed supernova, the followup would be absent. The neutrino burst is the beginning and end of the story, which by itself would not positively distinguish such an event from other supernovae, such as those that form neutron stars.

To tease out the subtle differences, the team modeled the supernovae to examine the energies and durations involved. Comparing failed supernovae to ones forming neutron stars, they predicted that the failed supernovae’s neutrino bursts would have shorter durations (~1 second) than those of supernovae forming neutron stars (~10 seconds). Additionally, the energy imparted in the interaction that produces each detection would be higher for failed supernovae (up to 56 MeV vs 33 MeV). These differences could potentially discriminate between the two types.

Profile of a Lonely Galaxy

KK 246 - A dwarf galaxy isolated in the Local Void


The vast majority of galaxies exist in clusters. These clusters are joined on larger scales by filaments and sheets of galaxies, between which lie gigantic galactic voids nearly entirely free of galaxies. These voids are often hundreds of millions of light years across, and only rarely does a lonely galaxy break the emptiness. Our own Milky Way rests in one of these large sheets, which borders the Local Void, a region nearly 200 million light years across. In that emptiness, there have been tentative identifications of up to sixteen galaxies, but only one has been confirmed to actually lie at a distance that places it within the void.

This dwarf galaxy, ESO 461-36, has been the target of recent study. As expected of galaxies within the void, ESO 461-36 is exceptionally isolated, with no galaxies discovered within 10 million light years of it.

What is surprising for such a lonely galaxy is that when astronomers compared the stellar disc of the galaxy with a mapping of hydrogen gas, the gas disc was tilted by as much as 55°. The team proposes that this may be due to a bar within the galaxy acting as a funnel along which gas could accrete onto the main disc. Another option is that this galaxy was recently involved in a small scale merger. The tidal pull of even a small satellite could potentially draw the gas into a different orbit.

This disc of gas is also unusually extended, being several times as large as the visual portion of the galaxy. While intergalactic space is an excellent vacuum, compared to the space within voids it is a relatively dense environment. This extreme under-density may contribute to the puffing up of the gaseous disc, but with the rarity of void galaxies, there is precious little to which astronomers can compare.

Compared with other dwarf galaxies, ESO 461-36 is also exceptionally dim. To quantify this, astronomers generally use a measure known as the mass-to-light ratio, in which the mass of the galaxy, in solar masses, is divided by its total luminosity, again using the Sun as the baseline. Typical galaxies have mass-to-light ratios between 2 and 10. Common dwarf galaxies can have ratios into the 30s. But ESO 461-36 has a ratio of 89, making it among the dimmest galaxies known for its mass.
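As a quick worked example of that ratio, here are illustrative round numbers only, not the measured values for ESO 461-36.

```python
# Illustrative round numbers, not the measured values for ESO 461-36.
mass_in_solar_masses = 8.9e8        # assumed total mass of a small dwarf galaxy
luminosity_in_solar_units = 1.0e7   # assumed total light output

mass_to_light = mass_in_solar_masses / luminosity_in_solar_units
print(f"M/L = {mass_to_light:.0f}")  # 89, far above the 2-10 typical of normal galaxies
```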

Going forward, astronomers seek to discover more void galaxies. Not only do such galaxies serve as interesting test beds for understanding galactic evolution in secular environments, they also serve as tests for cosmological models. In particular, the ΛCDM model predicts that there should be far more galaxies scattered in the voids than are observed. Future observations could help to resolve such discrepancies.

Unidentified Triangles

Three dots of light in the sky above Lafayette, Colorado.
Yes. These 3 dots are really all the "evidence" you get.


Apparently I have a reputation as a debunker. When I first started writing for Universe Today, Fraser told me to feel free to do articles relating to skepticism. I haven’t done so often, but I’ve been asked to cast a skeptical eye on the topic of UFOs and aliens, especially given a recent sighting which made it onto Good Morning America.

My general opinion on UFOs is that there’s really just not enough evidence to say whether or not the people making claims about them are right. In fact, there’s so little coherent evidence that it’s more apt to say that they’re “not even wrong”. In such cases, I generally find the topic uninteresting and not worthy of attention. I could address them as an exercise with Occam’s razor, but that’s been done to death. Instead, there needs to be something else that makes the topic worthy of addressing. As it happens, this case has it.

Typically, there’s two additional reasons I’ll discuss such a topic. The first is if such baseless belief causes demonstrable harm (such as recent doomsday criers convincing people to give up their homes and family to go on a fire and brimstone tour of the US to proclaim The End). With UFO buffs, this isn’t a concern generally.

The other reason I’ll discuss something is if I notice a particular logical fallacy that’s worth exploring in its own right. In watching a few of the videos related to the one shown on Good Morning America, I found another one that I think does a good job of highlighting the willingness to jump to conclusions. In this clip, an awestruck spectator is stunned by the lights because they form “a perfect triangle”. I’m teaching a geometry course this semester and I’ve been dealing a lot with triangles, but I’m not quite sure what he means. By definition, a triangle is simply a polygon with three sides, which meet at three points. Pick 3 points anywhere (so long as they don’t all lie on one line) and you’ll be able to form a triangle by connecting the dots. Thus, all you need to form a “perfect” triangle is 3 points. There’s nothing inspiring about that.

To give the guy as much credit as possible, I’ll assume he meant “equilateral”, which would mean each side is exactly the same length. This would be slightly more interesting. It would mean the lights were each affixed to a larger body keeping them at just the right distance, or that they were each manipulated independently to remain in formation. Still, neither of these tasks is especially impressive (I’m more impressed by the Blue Angels keeping formation at supersonic speeds), but before we even need to consider that, we should be asking a more fundamental question: Is the triangle actually equilateral?

Quickly taking a screen capture and importing it into a drawing program in which I can trace on some lines shows immediately that it doesn’t look at all equilateral. But there’s a good reason for that: we’re seeing it at an inclination, and objects can look very different depending on your particular point of view. What we’re really seeing is a two-dimensional projection of a shape in three dimensions. The closer to the plane of the triangle you put your eye, the flatter it looks. Rotate it, and the third point will seem to shift relative to the other two. In other words, we could very easily have an equilateral triangle projected in such a way that it looked just like the one the spectators saw. But at the same time, any triangle, equilateral or not, could be viewed in such a way as to replicate that projected shape.
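To see how strongly projection can distort things, here is a small Python sketch of my own (not derived from the footage): it tilts a genuinely equilateral triangle and measures the side lengths as they would appear on the sky.

```python
import numpy as np

# An equilateral triangle seen face-on vs. tilted 60 degrees toward the viewer.
# The "camera" simply drops the depth coordinate (an orthographic projection).
triangle = np.array([[0.0, 1.0, 0.0],
                     [-np.sqrt(3) / 2, -0.5, 0.0],
                     [np.sqrt(3) / 2, -0.5, 0.0]])   # equilateral, side length sqrt(3)

def apparent_sides(points, tilt_deg):
    t = np.radians(tilt_deg)
    rot = np.array([[1, 0, 0],
                    [0, np.cos(t), -np.sin(t)],
                    [0, np.sin(t), np.cos(t)]])      # tilt about the x-axis
    projected = (points @ rot.T)[:, :2]              # keep x and y, drop depth
    return [np.linalg.norm(projected[i] - projected[(i + 1) % 3]) for i in range(3)]

print(apparent_sides(triangle, 0))    # all three sides equal
print(apparent_sides(triangle, 60))   # two sides now appear strongly foreshortened
```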

Why then, did this fellow claim it was a “perfect triangle”? Simple: He had prior expectations. He couldn’t know, but mentally, he could envision it being “perfect” and his mind seized on that solution, ignoring all others and manufacturing details that didn’t necessarily follow from the observations. Sound familiar?

Ultimately, we can’t say what these lights were (although I find the road-flares-on-balloons explanation to be simple and to fit with all the observations, thus passing the test of parsimony). And I think that’s the important note: we don’t know. But let’s at least be knowledgeable and honest enough to admit what we don’t.

Companion Stars Could Cause Unexpected X-Rays

Many types of main sequence stars emit in the X-ray portion of the spectrum. In massive stars, strong stellar winds ripping through the extended atmosphere of the star create X-ray photons. In lower mass stars, magnetic fields twisting through the photosphere heat it sufficiently to produce X-rays. But in between, in the late B to mid A classes of stars, neither mechanism should be sufficient to produce X-rays. Yet when X-ray telescopes examined these stars, many were found to produce X-rays just the same.

The first exploration of the X-ray emission from this class of stars came from the Einstein Observatory, launched in 1978 and deorbited in 1982. While the telescope confirmed that these B and A stars had significantly less X-ray emission overall, seven of the 35 A type stars observed still showed some emission. Four of these were confirmed as being in binary systems in which the secondary stars could be the source of the emission, leaving three of the seven with unaccounted-for X-rays.

The German ROSAT satellite found similar results, detecting 232 X-ray stars in this range. Studies explored connections with irregularities in the spectra of these stars and with their rotational velocities, but found no correlation with either. The suspicion was that these stars simply hid undetected, lower-mass companions.

In recent years, some studies have begun exploring this possibility, using telescopes equipped with adaptive optics to search for companions. In some cases, as with Alcor (a member of the well-known visual double in the handle of the Big Dipper), companion stars have been detected, absolving the primary of being the source. However, in other cases, the X-rays still appear to be coming from the primary star even when the resolution is sufficient to spatially resolve the system. The conclusion is that either the main star truly is the source, or there are even more elusive, sub-arcsecond binaries skewing the data.

Another new study has taken up the challenge of searching for hidden companions, examining 63 known X-ray stars in the range not predicted to have X-ray emission. As a control, the team also searched 85 stars without the anomalous emission, giving a total sample of 148 target stars. When the images were taken and processed, the survey uncovered 68 candidate companions to 59 of the target objects; the number of companions exceeds the number of parent stars because some appear to lie in triple or higher-order systems.

Comparing the two groups, 43% of the X-ray stars appeared to have companions, while only 12% of the normal stars were found to have them. Some of the candidates may be the result of chance alignments rather than actual binary systems, giving an uncertainty of about ±5%.

While this study leaves some cases unresolved, the increased likelihood of X-ray stars having companions suggests that the majority of cases are caused by companions. Further studies by X-ray telescopes like Chandra could provide the angular resolution necessary to confirm that the emission is indeed coming from the partner objects, as well as search for companions at even finer separations.

The Universe Verse Continues – It’s Alive!

Back in 2009, I was given an odd book. It was the Universe Verse: Book One. In it, the author illustrates the formation of the universe, from the Big Bang to the formation of stars and galaxies, in rich detail and with painstaking attention to the tiniest of scientific facts. And to top it off, it’s all done in rhyme, as if Carl Sagan met Dr. Seuss. But as the title indicates, it was just the first of the series. In total, the author, James Lu Dunbar, is planning three books, and at long last, the second in the Universe Verse trilogy is ready for release. And we’ve got a sneak peek!

The previous book (available to preview on the author’s website) ended with the formation of heavy atoms in the cores of stars and supernovae. “It’s Alive!” begins with the formation of planets from these elements. It explains the formation of primordial oceans and the atmosphere and introduces abiogenesis. It takes the reader through the fundamentals of random mutations leading to natural selection, formation of amino acids, and biodiversity.

This chapter in the saga leaves off with life still quite simple, still at the bacterial level, but with hints at what it will become (the province of the next book). As with the previous book, this one is lavishly illustrated, but unlike its predecessor, it’s in color. This was all thanks to a series of pledges James received to continue his project, netting him six times more than the amount requested!

Like the last book, this one can be previewed free online, but to go even further, James is releasing the book as a free eBook. All you have to do is send him an email (address on his website) for a high resolution .pdf copy! He encourages anyone interested to request it, since “Everyone, especially children, should have the opportunity to read this story.” For even more behind the scenes with this book, James chronicled the making of it, complete with rough draft pages, on his blog.

For those interested in purchasing the book, it will be available in paperback on April 3rd of this year. Preorders are available here.

Exoplanet May Have Metal-Rich Atmosphere

Artist’s impression of GJ 1214b


At first glance, GJ 1214b is just another of the growing number of super-Earth class exoplanets. Discovered by the MEarth Project in 2009, it circles an M dwarf in Ophiuchus in a tight orbit, swinging around every 1.6 days. Late last year, GJ 1214b became the first super-Earth to have a component of its atmosphere detected, when astronomers compared its spectrum to models and found broad agreement with water vapor being present. New work, done by the same team, further refines the atmosphere’s potential characteristics.

Previously, the team suggested that their observations could fit two hypothetical planet models. In the first, the planet could be covered in hydrogen and helium, but the lack of absorption features in the atmosphere’s spectrum suggested that this was not the case unless the layer was hidden by thick clouds. However, from the data available, they could not conclusively rule out this possibility.

Combining their old observations with more recent ones from the MEarth Observatory, the team now reports that they have been able to rule out this scenario with 4.5 σ confidence (over 99.99%). As a result, the remaining model, which contains higher amounts of “metals” (astronomy-speak for all elements heavier than helium), is favored. The team also continues to support their earlier conclusion that the atmosphere is most likely at least 10% water vapor by volume, stating this with 3 σ (or 99.7%) confidence based on the new observations. While water vapor may give the impression of an inviting place for a tropical jungle, the team predicts the close-orbiting planet would be a sweltering 535 degrees Fahrenheit.
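For reference, those percentages follow from the usual conversion of a significance in sigma into a normal-distribution confidence level; a short sketch makes the correspondence explicit (the two-sided interval here is my assumption, the paper may quote one-sided values).

```python
from scipy.stats import norm

# Converting a significance quoted in sigma into a two-sided confidence level.
for sigma in (3.0, 4.5):
    confidence = norm.cdf(sigma) - norm.cdf(-sigma)
    print(f"{sigma} sigma -> {confidence * 100:.4f}%")
# 3.0 sigma -> 99.7300%   4.5 sigma -> 99.9993% ("over 99.99%")
```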

While these findings are interesting in their own right, the prevalence of such heavy elements may also give information about the structure and history of the planet itself. Models of planetary atmospheres suggest that, for planets of the mass and temperature expected for GJ 1214b, there are two primary formation scenarios. In the first, the atmosphere is directly accreted during the planet’s formation. However, this would produce a hydrogen-rich atmosphere, which has been ruled out. The second is that the planet formed further out, beyond the “snow line”, as an icy body, but moved inward after formation, creating the atmosphere from sublimated ices.

Although outside of the scope of their atmospheric research, the team also used the timing of the transits to search for wobbles in the orbit that could be caused by additional planets in the system. Ultimately, none were discovered.

STEREO Looks at the Sun; Finds Planets

STEREO spacecraft. Credit: NASA


The primary mission of the twin STEREO probes is to explore the 3-dimensional makeup of our Sun. Each craft carries a variety of instruments. One of them, the Heliospheric Imager (HI), doesn’t look directly at the Sun but instead watches a wide field near it in order to explore the physics of coronal mass ejections (CMEs), in particular ones aimed at the Earth. When not focused on solar ejections, the HI is free to make many other observations, including its first detection of an extrasolar planet.

As the Heliospheric Imager stares at the space between the Earth and Sun, it has made many novel observations. Since the device first opened its shutters in 2006, it has observed the interaction of CMEs with the atmosphere of Venus, the stripping of a comet’s tail by a CME, atomic iron in a comet’s tail, and “the very faint optical emission associated with so-called Corotating Interaction Regions (CIRs) in interplanetary space, where fast-flowing Solar wind catches up with slower wind regions.”

The spacecraft’s orbits, one preceding and one trailing the Earth, allow long stretches of time to stare at the same patches of sky. Each spacecraft can take pictures roughly every 40 minutes for almost 20 days in a row, giving excellent coverage. As a result, the images taken have the potential to be used for detailed survey studies. Such information is useful for variable star studies, and a recent summary of findings from the mission reported the detection of 263 eclipsing variable stars, 122 of which were not previously classified as such.

Another type of variable star observed by the STEREO HI was the cataclysmic sort, in particular V 471 Tau. This red giant/white dwarf binary in the Hyades star cluster is of strong interest to stellar astrophysicists because the system is suspected to be a strong candidate for a type Ia supernova as the red giant dumps mass onto its high-mass white dwarf companion. The star system is extremely erratic in its light output, and observations could help astronomers understand how such systems evolve.

Although planet hunting is at the very edge of the HI’s capabilities, eclipses caused by planet-sized objects are detectable for many of the brighter stars in the field of view, down to approximately 8th magnitude. Around one star, HD 213597, the STEREO team reported the detection of an object that seems too small to be a star based on the light curve alone. However, follow-up studies will be necessary to pin down the object’s mass more accurately.

Another Ceasing Cepheid



Earlier this year, I wrote an article about a Cepheid variable star named V19 in M31. This Cepheid once pulsated strongly and was one of the variables Hubble first used to find the distance to the Andromeda galaxy. But today, V19 is a rare instance of a Cepheid that has seemingly stopped pulsating. Another example of this phenomenon is Polaris, whose brightness variations have decreased in amplitude by nearly an order of magnitude over the past century, although some reports indicate that the amplitude may be beginning to increase again. Meanwhile, a new paper looks to add another star, HDE 344787, to this rare category, and according to the paper, it may be “even more interesting than Polaris”.

The star in question, HDE 344787, is an F class supergiant. Although its variations in brightness have been difficult to observe due to their small amplitude, astronomers have identified two pulsation modes, with periods of 5.4 days and 3.8 days. Perhaps even more interesting, the 5.4 day period seems to be growing. Careful analysis of the data suggests that this period is lengthening by about 13 seconds per year. This finding is in strong agreement with what models of stellar evolution predict for stars with metallicity similar to the Sun’s passing through the instability strip for the first time.

HDE 344787 is similar to Polaris in that both stars share the same spectral type. However, the existence of two modes of pulsation is not seen in Polaris. The lengthening of the pulsation period, however, is: for Polaris, the period is growing by 4.5 seconds per year. Another similarity is that, like Polaris and V19, HDE 344787 has been decreasing in the amplitude of its brightness variations since at least 1890.

While this star adds to the collection of Cepheids that have decreased in amplitude, it does little to solve the mystery of why they might do so. Currently, both Polaris and HDE 344787 lie near the middle of the instability strip and, as such, are not simply evolving out of the region of instability. However, the confirmation of a second pulsation mode may lend support to the notion that a change in one of these modes may serve to dampen the other, creating an effect known as the Blazhko Effect.

Ultimately, this star will require further observations to better understand its nature. Due to the faintness of this star (~10th magnitude), as well as the small change in brightness from the pulsations and the dense stellar field in which it lies, observations have been notoriously challenging.

Plausibility Check – Habitable Planets around Red Giants

Betelgeuse is a red supergiant star easily visible in our night sky; it has enough mass that it will end as a supernova, rather than as a white dwarf with a planetary nebula. Image credit: Hubble Space Telescope


While planets orbiting twin stars are a staple of science fiction, another trope is having humans live on planets orbiting red giant stars. The majority of the story of Planet of the Apes takes place on a planet around Betelgeuse. Planets around Arcturus in Isaac Asimov’s Foundation series make up the capital of his Sirius Sector. Superman’s home planet was said to orbit the fictional red giant Rao. Races on these planets are often depicted as old and wise, since their stars are aged and nearing the end of their lives. But is it really plausible to have such planets?

Stars don’t last forever. Our own Sun has an expiration date in about 5 billion years. At that time, the hydrogen fuel in the core of the Sun will have run out. Currently, the fusion of that hydrogen into helium provides the pressure that keeps the star from collapsing in on itself under gravity. When it runs out, that support mechanism will be gone and the core will start to shrink. This shrinking heats the star up again, increasing the temperature until a shell of hydrogen around the now-exhausted core becomes hot enough to take over the job of the core and begins fusing hydrogen to helium. This new energy source pushes the outer layers of the star back out, causing it to swell to thousands of times its previous size. Meanwhile, because the shell burns hotter, the star will give off 1,000 to 10,000 times as much light overall, but since this energy is spread out over such an enormous surface area, the surface is relatively cool and the star appears red, hence the name.
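A quick back-of-the-envelope illustration of why the swollen star turns red, using the Stefan-Boltzmann law; the luminosity and radius here are round illustrative numbers for a generic red giant, not for any particular star.

```python
# Rearranging the Stefan-Boltzmann law: T = T_sun * (L / L_sun)**0.25 * (R_sun / R)**0.5
T_SUN = 5772.0   # K, the Sun's present surface temperature
L = 3000.0       # luminosity, in units of the Sun's current output (illustrative)
R = 150.0        # radius, in units of the Sun's current radius (illustrative)

T = T_SUN * L**0.25 * (1.0 / R)**0.5
print(f"surface temperature ~ {T:.0f} K")   # roughly 3500 K, cool enough to glow red
```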

So this is a red giant: A dying star that is swollen up and very bright.

Now to take a look at the other half of the equation: namely, what determines the habitability of a planet? Since these sci-fi stories inevitably have humans walking around on the surface, there are some pretty strict criteria the planet will have to meet.

First off, the temperature must be not too hot and not too cold. In other words, the planet must be in the habitable zone, also known as the “Goldilocks zone”. This is generally a pretty good sized swath of celestial real estate. In our own solar system, it extends from roughly the orbit of Venus to the orbit of Mars. But what makes Mars and Venus inhospitable and Earth relatively cozy is our atmosphere. Unlike Mars’s, it’s thick enough to keep much of the heat we receive from the Sun, but it doesn’t trap far too much of it, like Venus’s.

This diagram shows the distances of the planets in the Solar System (upper row) and in the Gliese 581 system (lower row), from their respective stars (left). The habitable zone is indicated as the blue area, showing that Gliese 581 d is located inside the habitable zone around its low-mass red star. Based on a diagram by Franck Selsis, Univ. of Bordeaux. Credit: ESO

The atmosphere is crucial in other ways too. Obviously it’s what the intrepid explorers are going to be breathing. If there’s too much CO2, it’s not only going to trap too much heat, but also make it hard to breathe. Also, CO2 doesn’t block UV light from the Sun, so cancer rates would go up. So we need an oxygen-rich atmosphere, but not too oxygen-rich or there won’t be enough greenhouse gases to keep the planet warm.

The problem here is that oxygen-rich atmospheres just don’t exist without some assistance. Oxygen is actually very reactive. It likes to form bonds, making it unavailable to float free in the atmosphere like we want. It forms things like H2O, CO2, oxides, etc. This is why Mars and Venus have virtually no free oxygen in their atmospheres. What little they do have comes from UV light striking the atmosphere and causing the bonded forms to dissociate, temporarily freeing the oxygen.

Earth only has as much free oxygen as it does because of photosynthesis. This gives us another criterion we’ll need for habitability: the ability to sustain photosynthesis.

So let’s start putting this all together.

Firstly, the evolution of the star as it leaves the main sequence, swelling up into a red giant and growing far more luminous, will mean that the “Goldilocks zone” sweeps outwards. Planets that were formerly habitable, like the Earth, will be roasted if they aren’t entirely swallowed by the Sun as it grows. Instead, the habitable zone will lie further out, more where Jupiter is now.
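A rough scaling sketch shows how the zone sweeps outwards, assuming the habitable-zone distance simply scales with the square root of the star’s luminosity so that a planet receives the same flux Earth does today; the luminosity values are illustrative stages of the climb up the giant branch.

```python
import math

# Flux ~ L / d**2, so a planet receiving Earth's present flux sits at d ~ sqrt(L) AU.
for luminosity in (1.0, 30.0, 1000.0, 10000.0):    # in units of the Sun's current output
    distance_au = math.sqrt(luminosity)
    print(f"L = {luminosity:>7.0f} L_sun -> habitable zone at ~{distance_au:>5.1f} AU")
# ~5 AU (Jupiter's neighborhood) corresponds to roughly 30x the Sun's present brightness;
# by the time the giant reaches thousands of solar luminosities the zone has swept
# out to tens of AU.
```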

However, even if a planet sits in this new habitable zone, that doesn’t make it habitable unless it also has an oxygen-rich atmosphere. For that, we need to convert the atmosphere from an oxygen-starved one to an oxygen-rich one via photosynthesis.

So the question is how quickly this can occur. Too slow, and the habitable zone may have already swept by, or the star may have run out of hydrogen in the shell and started contracting again, only to ignite helium fusion in the core, once again freezing the planet.

The only example we have so far is our own planet. For the first three billion years of life, there was little free oxygen, until photosynthetic organisms arose and started building it up towards today’s levels. However, this process took several hundred million years. While it could probably be sped up by an order of magnitude, to tens of millions of years, with genetically engineered bacteria seeded on the planet, we still need to make sure the timescales work out.

It turns out the timescales are different for different masses of stars. More massive stars burn through their fuel faster, and their giant phases are correspondingly shorter. For stars like the Sun, the red giant phase can last about 1.5 billion years, roughly 100 times longer than is necessary to develop an oxygen-rich atmosphere. For stars twice as massive as the Sun, that timescale drops to a mere 40 million years, approaching the lower limit of what we’d need. More massive stars evolve even more quickly. So for this to be plausible, we’ll need lower mass stars that evolve slowly; a rough upper limit is a two solar mass star.

However, there’s one more effect we need to worry about: can we have enough CO2 in the atmosphere to even have photosynthesis? While not nearly as reactive as oxygen, carbon dioxide is also subject to being removed from the atmosphere. This is due to effects like silicate weathering, for example CO2 + CaSiO3 -> CaCO3 + SiO2. While these effects are slow, they build up over geological timescales. This means we can’t have old planets, since they would have had all their free CO2 locked away into the surface. This balance was explored in a paper published in 2009, which determined that, for an Earth-mass planet, the free CO2 would be exhausted long before the parent star even reached the red giant phase!

So we’re required to have low mass stars that evolve slowly to allow enough time to develop the right atmosphere, but if they evolve that slowly, then there’s not enough CO2 left to build the atmosphere anyway! We’re stuck with a real Catch-22. The only way to make this feasible again is to find a way to introduce sufficient amounts of new CO2 into the atmosphere just as the habitable zone starts sweeping by.

Fortunately, there are some pretty large repositories of CO2 just flying around! Comets carry large amounts of frozen carbon monoxide and carbon dioxide. Crashing a few of them into a planet would introduce enough CO2 to potentially get photosynthesis started (once the dust settled down). Do that a few hundred thousand years before the planet enters the habitable zone, wait ten million years, and the planet could potentially be habitable for as much as an additional billion years.

Ultimately this scenario would be plausible, but not exactly a good personal investment, since you’d be dead long before you could reap the benefits. A long-term strategy for the survival of a space-faring species, perhaps, but not a quick fix for tossing down colonies and outposts.