Watch Kepler Press Conference Today Live On Universe Today

Get the news of the latest findings regarding stars and their structures during a press conference that will be streamed live from Aarhus University in Denmark today at 11 am EDT (1500 GMT). Using data from NASA’s Kepler spacecraft, an international research team has examined and characterized thousands of stars by using the natural pulse of stellar light waves, thereby gaining new insights into stellar structure and evolution.

Interstellar Scintillation

Barnard 68 (Credit: ESO)


Anyone who has looked at stars in the night sky (especially ones low on the horizon) has undoubtedly seen the familiar effect of twinkling. This effect is caused by turbulence in the atmosphere, as small overdensities bend the path of the light ever so slightly. Often, vivid color shifts occur since the effects are wavelength dependent. All of this happens in the short distance between the edge of the atmosphere and our eyes. Yet oftentimes, giant molecular clouds lie between our detectors and a star. Could these clouds of gas and dust cause a twinkling effect as well?


In theory, there’s no reason they shouldn’t. As the giant molecular clouds intercepting the incoming starlight move and distort, so too should the path of the light. The difference is that, due to the extremely low density and extremely large size, the timescales over which this distortion would take place would be far longer. Should it be discovered, it would provide astronomers another method by which to discover previously hidden gas.

Doing this is precisely the goal of a team of astronomers working at the University of Paris and Sharif University in Iran. To get an understanding of what to expect, the team first simulated the effect, taking into account the properties of the cloud (distribution, velocity, etc.) as well as refraction and reflection. They estimated that, for a star in the Large Magellanic Cloud with light passing through typical galactic H2 gas, this would produce twinkles with changes taking around 24 minutes.
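For the curious, a timescale of tens of minutes falls naturally out of a back-of-the-envelope argument: if a density fluctuation of size R drifts across the line of sight at transverse velocity v, the brightness should change on a timescale of roughly R/v. The sketch below is purely illustrative; the fluctuation scale and velocity are my assumptions, not values from the paper.

```python
# Back-of-the-envelope scintillation timescale: t ~ R / v,
# where R is the size of a density fluctuation in the cloud and
# v is its transverse velocity across the line of sight.
# The values below are illustrative, not taken from the paper.

R = 3.0e7   # fluctuation scale in metres (~30,000 km, assumed)
v = 2.0e4   # transverse velocity in m/s (~20 km/s, assumed)

t_seconds = R / v
print(f"Modulation timescale: {t_seconds / 60:.0f} minutes")  # ~25 minutes
```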

Yet there are many other effects that can produce modulations on the same timescale, such as variable stars. Additional constraints would be necessary to claim that a change is due to a twinkling effect and not a product of the star itself. As stated before, the effect is wavelength dependent, which would produce a “variation of the characteristic time scale … between the red side of the optical spectrum and the blue side.”

With expectations in hand, the team began searching for this effect in areas of the sky in which they knew especially high densities of gas to exist. Thus, they pointed their telescopes towards dense nebulae known as Bok globules like Barnard 68 (pictured above). Observations were taken using the 3.6 meter ESO NTT-SOFI telescope since it had the capabilities to also take infrared images and better explore the potential effects on the red side of the spectrum.

From their observations over two nights, the team discovered one instance in which the modulation of brightness in the different wavelengths followed the predicted effects. However, they note that a single observation of this kind does not conclusively demonstrate the principle. The team also observed stars in the direction of the Small Magellanic Cloud to attempt to observe this twinkling effect there due to previously undetected clouds along the line of sight. In this attempt, they were unsuccessful. Further observations along these lines in the future could help to constrain the amount of cold gas within the galaxy.

The Hunt for Young Exoplanets

While there is a great deal of excitement and effort in the hopes of finding small, terrestrial-sized exoplanets, another realm of exoplanet discovery that is often overlooked is that of planets of differing ages, which can show how planetary systems evolve. The first discovered exoplanet orbited a pulsar, showing that planets can be hardy enough to survive the potentially violent deaths of their parent stars. On the other end, young planets can help astronomers constrain how planets form, and a potential new discovery may help in that regard.


Historically, astronomers have often avoided looking at stars younger than about 100 million years. Their young nature tends to make them unruly. They are prone to flares and other erratic behavior that often makes observations messy. Additionally, many young stars retain debris disks or are still embedded in the nebula in which they formed, which also obscures observations.

Despite this, some astronomers have begun developing targeted searches for young exoplanets. The age of the exoplanet is not independently derived, but instead, taken from the age of the host star. This too can be difficult to determine. For isolated stars, there are precious few methods (such as gyrochronology) and they generally have large errors associated with them. Thus, instead of looking for isolated stars, astronomers searching for young exoplanets have tended to focus on clusters which can be dated more easily using the main sequence turn off method.

Through this methodology, astronomers have searched clusters and other groups, such as the Beta Pictoris moving group, which turned up a planet earlier this year. The Beta Pic moving group boasts an age of ~12 million years, making it one of the youngest associations currently known.

Trumpler 37, the young cluster embedded in IC 1396 (the region containing the Elephant’s Trunk Nebula), is one of the few clusters with an even younger age of 1-5 million years. It was one of several young clusters observed by a team of German astronomers led by Gracjan Maciejewski of Jena University. The group utilized an array of telescopes across the world to continuously monitor Trumpler 37 for several weeks. During that time, they discovered numerous flares and variable stars, as well as a star with a dip in its brightness that could be a planet.

The team cautions that the detection may not be a planet. Several configurations can mimic planetary transit lightcurves, such as “the central transit of a low-mass star in front of a large main-sequence star or red giant, grazing eclipses in systems consisting of two main-sequence stars and a contamination of a fainter eclipsing binary along the same line of sight.” Due to the physics of small objects, brown dwarfs and many Jovian-type planets have similar sizes, making them difficult to distinguish from the light curve alone. Spectroscopic observations will have to be undertaken to confirm the object truly is a planet.

However, assuming it is, based on the size of the dip in brightness, the team predicts the planet is about twice the radius of Jupiter, and about 15 times the mass. If so, this would be in good agreement with models of planetary formation for the expected age. Ultimately, planets of such age will help test our understanding of how planets form, whether it be from a single gravitational collapse early on, or slow accretion over time.
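To see how a radius estimate like that follows from a dip in brightness: to first order, the fractional depth of a transit is just the ratio of the planet’s and star’s projected areas, depth ≈ (Rp/R★)². A minimal sketch, using an assumed depth and stellar radius rather than the survey’s actual numbers:

```python
import math

# Rough transit-depth relation: depth ~ (R_planet / R_star)**2.
# Given an observed depth and an assumed stellar radius, estimate R_planet.
# Both numbers here are illustrative, not the values from the survey.

R_SUN = 6.957e8       # metres
R_JUP = 7.149e7       # metres

depth = 0.04          # assumed fractional dip in brightness (4%)
r_star = 1.0 * R_SUN  # assumed host-star radius

r_planet = r_star * math.sqrt(depth)
print(f"Estimated planet radius: {r_planet / R_JUP:.1f} R_Jup")  # ~1.9 R_Jup
```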

Breaking News: The Sun Worked 175 Years Ago!

The sunspot butterfly diagram. This modern version is constructed (and regularly updated) by the solar group at NASA Marshall Space Flight Center.


You’ll have to forgive my title. After writing so many articles as moderately as I could, I couldn’t help but engage in a bit of sensationalism of my own, especially in the interest of sarcasm. Although it’s not especially exciting that the sun has indeed been working for nearly two centuries (indeed, much longer than that), what is interesting is how, using historical data, scientists have confirmed that the processes we see today have been relatively consistent since 1825.


The observations revolve around a familiar plot known as the butterfly diagram (pictured above). This diagram depicts the position of sunspots at various latitudes on the sun’s surface as time progresses. At the beginning of a cycle, sunspots start off at high latitudes and, as the cycle progresses, appear at lower and lower latitudes until they disappear and the cycle repeats. The pattern formed resembles the wings of a butterfly, thereby giving the diagram its name.
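If you’re curious what actually goes into such a plot, it is essentially just a scatter of sunspot latitude against observation date. A minimal sketch is below; the handful of (year, latitude) records is invented, and the real digitized archive is of course far richer.

```python
import matplotlib.pyplot as plt

# Minimal butterfly-diagram sketch: scatter sunspot latitude against time.
# 'records' stands in for a digitized table of (decimal_year, latitude_deg)
# entries; both the format and the values are invented, not Schwabe's data.
records = [
    (1826.1,  25.0), (1826.4, -22.0), (1827.3,  18.0), (1828.0, -15.0),
    (1829.0,  12.0), (1830.5,  -6.0), (1831.2,   4.0),
    (1834.2,  28.0), (1835.0, -26.0), (1836.0,  20.0), (1838.4,  -9.0),
]

years = [year for year, lat in records]
latitudes = [lat for year, lat in records]

plt.scatter(years, latitudes, s=8, color="k")
plt.axhline(0.0, lw=0.5)                      # the solar equator
plt.xlabel("Year")
plt.ylabel("Heliographic latitude (degrees)")
plt.title("Sunspot butterfly diagram (schematic)")
plt.show()
```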

Although sunspots have been observed as far back as 364 BC by Chinese astronomers, telescopic observations of them did not start until the early 1600s. Continuous observation of the sun and its spots started in 1876 at the Royal Greenwich Observatory. There, Edward Maunder recognized the pattern of sunspots and published them in 1904 in the format that is now the famous butterfly diagram. The diagram, as it’s usually shown, only comprises data starting from around 1876 and continuing until the present day. But the use of new records has extended the diagram back an additional 51 years, covering four additional solar cycles. Although many observations exist with total sunspot counts, this new set of data includes detailed documentation of the position of the spots on the solar disc.

The observations were made by German astronomer Heinrich Schwabe. Originally an apothecary, he won a telescope in a lottery in 1825 and was so fascinated that he sold his family business four years later. Schwabe observed the Sun compulsively, attempting to discover a new planet with an orbit interior to Mercury’s by witnessing it transiting the Sun. Although this effort was doomed to failure, Schwabe maintained detailed records of the sunspots. He even recognized that the pattern of spots occurred in an 11 year cycle and published the discovery in 1843. It was met with little attention for several years until it was included in Alexander von Humboldt’s Kosmos. Due to this discovery, the 11 year solar cycle is also referred to as the Schwabe cycle.

From 1825 until 1867, Schwabe compiled at least 8468 observations of the Sun’s disc, drawn on 5 cm circles. On his death, these documents, as well as the rest of his scientific works, were donated to the Royal Astronomical Society of London, and in 2009 they were provided to a team of researchers for digitization. Of the 8468 drawings, 7299 “have a coordinate system which is found to be aligned with the celestial equator,” making them suitable for translation into scientific data.

Thus far, the team has converted 11% of the images into usable data, and already this has produced a detailed butterfly diagram predating those constructed elsewhere. From it, the astronomers undertaking the conversion have made some interesting observations. The cycle beginning around 1834 was weaker than others around that time. The following one, starting around 1845, displayed a notable asymmetry: sunspots in the southern hemisphere were conspicuously lacking for the first 1-2 years of the cycle, whereas most cycles are fairly well mirrored. Although unusual, such phase shifts are not unprecedented. In fact, another study using historical records has demonstrated that, for the last 300 years, one hemisphere has always led (although not usually so greatly) for several cycles before trading off.

As with the recently discussed historical project on weather trends, this reanalysis of historical data is one of many such projects giving us a broader picture of the trends we see today and how they have changed over time. While undoubtedly many will be demonstrated to be mundane and familiar, undeserving of the exaggerated significance of my title, this is how science works: by expanding our knowledge to test our expectations.

NOTE: I’d emailed the team asking for permission to show their image of the historical butterfly diagram, but since I haven’t gotten permission, I didn’t reproduce it here. But you can still view it in the paper. Go do so. It’s awesomely familiar.

Virtual Observatory Discovers New Cataclysmic Variable

Simulation of Intermediate Polar CV star (Dr Andy Beardmore, Keele University)


In my article two weeks ago, I discussed how data mining large surveys through online observatories would lead to new discoveries. Sure enough, a pair of astronomers, Ivan Zolotukhin and Igor Chilingarian, has used data from the Virtual Observatory to announce the discovery of a cataclysmic variable (CV).


Cataclysmic variables are often called “novae.” However, they’re not single stars. These objects are actually binary systems in which matter accreted from a secondary (usually post-main-sequence) star onto a white dwarf causes large increases in brightness. The accreted matter piles up on the white dwarf’s surface until it reaches a critical density and undergoes a brief but intense phase of fusion, increasing the brightness of the star considerably. Unlike in a type Ia supernova, the white dwarf never reaches the critical mass needed to destroy the star entirely.

The team began by considering a list of 107 objects from the Galactic Plane Survey conducted by the Advanced Satellite for Cosmology and Astrophysics (ASCA, a Japanese satellite operating in the x-ray regime). These objects were exceptional x-ray emitters that had not yet been classified. While other astronomers have done targeted investigations of individual objects requiring new telescope time, this team attempted to determine whether any of the odd objects were CVs using readily available data from the Virtual Observatory.

Since the objects were all strong x-ray sources, they all met at least one criterion for being a CV. Another is that CV stars are often strong Hα emitters, since the eruptions often eject hot hydrogen gas. To analyze whether or not any of the objects were emitters in this regime, the astronomers cross referenced the list of objects with data from the Isaac Newton Telescope Photometric Hα Survey of the northern Galactic plane (IPHAS) using a color-color diagram. In the field of view of the IPHAS survey that overlapped with the region from the ASCA image for one of the objects, the team found an object that emitted strongly in Hα. But in such a dense field and with such different wavelength regimes, it was difficult to confirm that the two detections were the same object.

To assist in determining if the two interesting objects were indeed the same, or whether they just happened to lie nearby, the pair turned to data from Chandra. Since Chandra has much smaller uncertainty in the positioning (0.6 arcsecs), the pair was able to identify the object and determine that the interesting object from IPHAS was indeed the same one from the ASCA survey.
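This kind of positional cross-matching is something anyone can try on archival catalogs. A minimal sketch using astropy follows; the coordinates and the 0.6 arcsecond tolerance are illustrative stand-ins, not the actual ASCA/IPHAS/Chandra entries.

```python
from astropy.coordinates import SkyCoord
import astropy.units as u

# Sketch of positional cross-matching between an X-ray source list and an
# Halpha-emitter list. The coordinates below are made up; the real work used
# the ASCA, IPHAS and Chandra catalogues.
xray = SkyCoord(ra=[285.11200] * u.deg, dec=[4.93100] * u.deg)
halpha = SkyCoord(ra=[285.11205, 285.34020] * u.deg,
                  dec=[4.93105, 4.87700] * u.deg)

# For each X-ray source, find the nearest Halpha source on the sky.
idx, sep2d, _ = xray.match_to_catalog_sky(halpha)

# Accept a match only if it falls within the positional uncertainty
# (Chandra-like ~0.6 arcsec; ASCA's error circle is far larger).
tolerance = 0.6 * u.arcsec
for i, (j, sep) in enumerate(zip(idx, sep2d)):
    if sep < tolerance:
        print(f"X-ray source {i} matches Halpha source {j} "
              f"({sep.to(u.arcsec).value:.2f} arcsec apart)")
    else:
        print(f"X-ray source {i}: no counterpart within the tolerance")
```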

Thus, the object passed the two tests the team had devised for finding cataclysmic variables. At this point, followup observation was warranted. The astronomers used the 3.5-m Calar Alto telescope to conduct spectroscopic observations and confirmed that the star was indeed a CV. In particular, it appears to belong to a subclass in which the primary white dwarf has a magnetic field strong enough to disrupt the accretion disk, so that the point of contact actually lies over the poles of the star (this is known as an intermediate polar CV).

This discovery is an example of how discoveries are just waiting to happen with data that’s already available, sitting in archives and waiting to be explored. Much of this data is even available to the public and can be mined by anyone with the proper computer programs and know-how. Undoubtedly, as these storehouses of data become organized in more user-friendly ways, additional discoveries will be made in just this manner.

The Tug of Exoplanets on Exoplanets

Earlier this year, I wrote about how an apparent change in the orbital characteristics of the planet TrES-2b may be indicative of a new planet, much in the same way perturbations of Uranus revealed the presence of Neptune. A follow-up study has since been conducted by astronomers at the University of Arizona, and another study, on the planet WASP-3b, also enters the fray.

The new study by the University of Arizona team observed TrES-2b on June 15, 2009, just seven orbits after the observations by Mislis et al. that reported the change in orbit. The findings of Mislis et al. were that not only was the onset of the transit offset, but the angle of inclination was slowly changing. Yet the Arizona team found their results matched the previous data sets and found no indication of either of these effects (within error) when compared to the timing predictions from other, previous studies.

Additionally, an unrelated study led by Ronald Gilliland of the Space Telescope Science Institute, discussing various sampling modes of the Kepler telescope, used the TrES-2b system as an example and had coincidentally preceded and overlapped one of the observations made by Mislis et al. This study, too, found no variation in the orbital characteristics of the planet.

Another test they applied to determine whether the orbit was changing was the depth of the eclipse. Mislis’ team predicted that the trend would slowly cause the plane of the orbit to change such that, eventually, the planet would no longer eclipse the star. But before that happened, there should be a period of time in which the planet covered less and less of the star. If that were the case, the amount of light blocked would decrease as well until it vanished altogether. The Arizona team compared the depth of the eclipses they observed with the earlier observations and found no change here either.

So what went wrong with the data from Mislis et al.? One possibility is that they did not properly account for differences between their filter and the one used for the original observations from which the transit timing was determined. Stars have a feature known as limb darkening, in which the edges appear darker due to the angle at which light is being released. Some light is scattered in the atmosphere of the star, and since the scattering is wavelength dependent, so too are the effects of limb darkening. If a photometric filter is observing in a slightly different part of the spectrum, it will record those effects differently.
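To make the wavelength dependence concrete, a commonly used description is the quadratic limb darkening law, I(μ)/I(centre) = 1 − u1(1 − μ) − u2(1 − μ)², where μ is the cosine of the angle from disc centre and the coefficients depend on the filter. The coefficients below are illustrative choices, not measured values; they simply show the limb dimming more in the blue than in the red.

```python
def quadratic_limb_darkening(mu, u1, u2):
    """Relative intensity across the stellar disc, I(mu)/I(centre).

    mu = cos(theta), where theta is the angle from disc centre;
    mu = 1 at the centre, mu -> 0 at the limb.
    """
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

# Illustrative (not measured) coefficients: limb darkening is stronger in the
# blue than in the red, so a transit observed through different filters has a
# slightly different shape and depth.
for band, (u1, u2) in {"blue": (0.65, 0.15), "red": (0.35, 0.25)}.items():
    print(band, f"limb/centre intensity = {quadratic_limb_darkening(0.1, u1, u2):.2f}")
```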

While these findings have discredited the notion that there are perturbations in the TrES-2b system, the idea that we can find exoplanets through their effects on known ones is still an attractive one that other astronomers are pursuing. One team, led by G. Maciejewski, has launched an international observing campaign to discover new planets by just this method. The campaign uses a series of telescopes ranging from 0.6 to 2.2 meters located around the world to frequently monitor stars with known transiting planets. And this study may have just had its first success.

In a paper recently uploaded to arXiv, the team announced that variations in the timing of transits for the planet WASP-3b indicate the presence of a 15 Earth-mass planet in a 2:1 orbital resonance with the known one. Currently, the team is working to make follow-up observations of their own, including radial velocity measurements with the Hobby-Eberly Telescope owned by the University of Texas at Austin. With any luck, this new method will begin to discover new planets.
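The core of the transit timing technique is disarmingly simple: fit a linear ephemeris T_n = T0 + n·P to the mid-transit times and look for systematic residuals (the “O − C,” observed minus calculated). A minimal sketch with invented numbers; the ephemeris and timings here are placeholders, not the published WASP-3b values.

```python
# Sketch of a transit-timing check: compare observed mid-transit times with
# a linear ephemeris T_n = T0 + n * P. Systematic, periodic residuals (O - C)
# can betray the gravitational tug of an unseen companion. The epoch, period
# and times below are invented for illustration.
T0 = 2454605.560   # reference mid-transit time (BJD), assumed
P = 1.846835       # orbital period in days, assumed

observed = {0: 2454605.5601, 54: 2454705.2904, 108: 2454805.0169}  # epoch: BJD

for n, t_obs in observed.items():
    t_calc = T0 + n * P
    oc_minutes = (t_obs - t_calc) * 24 * 60
    print(f"epoch {n:4d}: O - C = {oc_minutes:+.1f} min")
```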

UPDATE: It looks like Maciejewski’s team has announced another potential planet through timing variations. This time around WASP-10.

Stolen: Magellanic Clouds – Return to Andromeda

The Magellanic Clouds are an oddity. Their relative velocity is suspiciously close to the escape velocity of the Milky Way system, making it somewhat difficult for them to have formed as part of the system. Additionally, their direction of motion is nearly perpendicular to the disk of the galaxy, and satellite systems, especially ones as large as the Magellanic Clouds, should show more orientation to the plane if they had formed alongside it. Their gas content is also notably different from that of the other satellite galaxies of our galaxy. The combination of these features suggests to some that the Magellanic Clouds aren’t native to the Milky Way and were instead intercepted.

But where did they come from? Although the suggestion is not entirely new, a recent paper, accepted to the Astrophysical Journal Letters, suggests they may have been captured after a past merger in the Andromeda Galaxy (M31).

To analyze this proposition, the researchers, Yang (from the Chinese Academy of Sciences) and Hammers (of the University of Paris, Diderot), conducted simulations backtracking the positions of the Magellanic Clouds. While this may sound straightforward, the process is anything but. Since galaxies are extended objects, their three dimensional shapes and mass profiles must be worked out extremely well to truly account for the path of motion. Additionally, the Andromeda galaxy is certainly moving and would have been in a different position than the one it is observed in today. But exactly where was it when the Magellanic Clouds would have been expelled? This is an important question, but not an easy one to answer, given that measuring the proper motions of objects so far away is difficult.

But wait. There’s more! As always, there’s a significant amount of the mass that can’t be seen at all! The presence and distribution of dark matter would greatly have affected the trajectory of the expelled galaxies. Fortunately, our own galaxy seems to be in a fairly quiescent phase and other studies have suggested that dark matter halos would be mostly spherical unless perturbed. Furthermore, distant galaxy clusters such as the Virgo supercluster as well as the “Great Attractor” would have also played into the trajectories.

These uncertainties take what would be a fairly simple problem and turn it into a case in which the researchers were instead forced to explore the parameter space with a range of reasonable inputs to see which values worked. In doing so, the pair of astronomers concluded “it could be the case, within a reasonable range of parameters for both the Milky Way and M31.” If so, the clouds spent 4 – 8 billion years flying across intergalactic space before being caught by our own galaxy.
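For readers curious what “backtracking” looks like in its very simplest form, the toy sketch below reverses a test particle’s present-day velocity and integrates it in a fixed, spherical halo potential. It ignores essentially every complication the authors had to wrestle with (M31, the moving potential, the dark matter uncertainties); the potential, the step size, and the present-day position and velocity are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy sketch of orbit backtracking: flip the present-day velocity of a test
# particle and step it forward with a leapfrog integrator in a simple,
# fixed logarithmic halo potential. Purely illustrative.

V0 = 220.0                  # circular velocity, km/s (illustrative)
RC = 5.0                    # core radius, kpc (illustrative)
KPC_PER_KMS_GYR = 1.02271   # 1 km/s is about 1.02 kpc/Gyr

def acceleration(pos):
    """Acceleration (kpc/Gyr^2) for the potential 0.5 * V0^2 * ln(r^2 + RC^2)."""
    r2 = np.dot(pos, pos) + RC ** 2
    return -((V0 * KPC_PER_KMS_GYR) ** 2) * pos / r2

# Present-day Galactocentric position (kpc) and velocity (km/s), illustrative.
pos = np.array([-1.0, -41.0, -28.0])
vel = np.array([-57.0, -226.0, 221.0]) * KPC_PER_KMS_GYR  # now in kpc/Gyr

vel = -vel                       # reverse time: run the orbit backwards
dt = 0.001                       # time step in Gyr
for _ in range(int(4.0 / dt)):   # trace back roughly 4 Gyr
    vel += 0.5 * dt * acceleration(pos)   # kick
    pos += dt * vel                       # drift
    vel += 0.5 * dt * acceleration(pos)   # kick

print(f"Position ~4 Gyr ago (kpc): {pos.round(1)}")
```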

But could there be further evidence to support this? The authors note that if Andromeda underwent a merger event of such scale, it would likely have induced vast amounts of star formation. As such, we should expect to see an increased number of stars with this age. The authors do not make any statements as to whether or not this is the case. Regardless, the hypothesis is interesting and reminds us how dynamic our universe can be.

The Habitability of Gliese 581d

The Gliese 581 system has been making headlines recently for the newly announced planet that may lie in the habitable zone. Hopes were somewhat dashed when we were reminded that the certainty level of its discovery was only 3 sigma (95%, whereas most astronomical discoveries are at or above the 99% confidence level before major announcements), but the Gliese 581 system may yet have more surprises. When the planet Gliese 581d was first discovered, it was placed outside of the expected habitable zone. But in 2009, reanalysis of the data refined the orbital parameters and moved the planet in, just to the edge of the habitable zone. Several authors have suggested that, with sufficient greenhouse gasses, this may push Gliese 581d into the habitable zone. A new paper to be published in an upcoming issue of Astronomy & Astrophysics simulates a wide range of conditions to explore just what characteristics would be required.

The team, led by Robin Wordsworth at the University of Paris, varied properties of the planet, including surface gravity, albedo, and the composition of potential atmospheres. Additionally, the simulations were also run for a planet in a similar orbit around the sun (Gliese 581 is an M dwarf) to understand how the different distribution of stellar energy could affect the atmosphere. The team discovered that, for atmospheres comprised primarily of CO2, the redder star would warm the planet more than a solar type star would, because CO2 cannot scatter the redder light as effectively, allowing more of it to reach the ground.
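As a point of reference for why greenhouse warming matters so much here, the greenhouse-free equilibrium temperature of a planet follows from a simple energy balance: T_eq = T★ √(R★/2a) (1 − A)^¼. The stellar temperature, radius, orbital distance, and albedo below are approximate or assumed values, used only to show that Gliese 581d sits well below freezing without an atmosphere’s help.

```python
import math

# Rough equilibrium-temperature estimate for a planet, ignoring any greenhouse
# effect: T_eq = T_star * sqrt(R_star / (2 a)) * (1 - A)**0.25.
# Stellar and orbital values are approximate for Gliese 581d; the albedo is
# an assumption.
T_STAR = 3500.0         # effective temperature of Gliese 581 (K), approximate
R_STAR = 0.3 * 6.957e8  # stellar radius (m), approximate
a = 0.22 * 1.496e11     # orbital distance (m), approximate
A = 0.2                 # Bond albedo, assumed

T_eq = T_STAR * math.sqrt(R_STAR / (2 * a)) * (1 - A) ** 0.25
print(f"Equilibrium temperature: {T_eq:.0f} K")  # ~186 K, well below freezing
```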

One of the potential roadblocks to warming the team considered was the formation of clouds. The team first considered CO2 clouds, which are likely towards the outer edges of a habitable zone and which form on Mars. Since clouds tend to be reflective, they would counteract warming from incoming starlight and cool the planet. Again, due to the nature of the star, the redder light would mitigate this somewhat, allowing more of it to penetrate a potential cloud deck.

Should some H2O be present, its effects are mixed. While clouds and ice are both very reflective, which would decrease the amount of energy captured by the planet, water also absorbs well in the infrared. As such, clouds of water vapor can intercept heat radiating from the surface back towards space, trapping it and producing a net warming. The problem is getting such clouds to form in the first place.

The inclusion of nitrogen gas (common in the atmospheres of planets in the solar system) had little effect on the simulations. The primary reason is nitrogen’s lack of absorption of redder light. In general, its inclusion only slightly changed the specific heat of the atmosphere and broadened the absorption lines of other gasses, allowing for a very minor additional ability to trap heat. Given the team was looking for conservative estimates, they ultimately discounted nitrogen from their final considerations.

With the combination of all these considerations, the team found that, even given the most unfavorable values of most variables, a sufficiently high atmospheric pressure would allow for the presence of liquid water on the surface of the planet, a key requirement for abiogenesis. Favorable combinations of the other characteristics were also able to produce liquid water at pressures as low as 5 bars. The team also notes that other greenhouse gasses, such as methane, were excluded due to their rarity, but should they exist, the prospects for liquid water would improve further.

Ultimately, the simulation was only a one dimensional model, which essentially considered a thin column of the atmosphere on the day side of the planet. The team suggests that, for a better understanding, three dimensional models would need to be created. In the future, they plan to use just such modeling, which would allow for a better understanding of what is happening elsewhere on the planet. For example, should temperatures fall too quickly on the night side, the necessary gasses could condense out, leaving the atmosphere in an unstable state. Additionally, as we discover more transiting exoplanets and determine their atmospheric properties from transmission spectra, astronomers will be better able to constrain what typical atmospheres really look like.

Astronomy Without A Telescope – No Metal, No Planet

The spiral galaxy NGC 4565, considered a close analogue of the Milky Way and with distinctly dusty outer regions. Credit: ESO.


A Japanese team of astronomers have reported a strong correlation between the metallicity of dusty protoplanetary disks and their longevity. From this finding they propose that low metallicity stars are much less likely to have planets, including gas giants, due to the shorter lifetime of their protoplanetary disks.

As you are probably aware, ‘metal’ is astronomy-speak for anything higher up the periodic table than hydrogen and helium. The Milky Way has a metallicity gradient – where metallicity drops markedly the further out you go. In the extreme outer galaxy, about 18 kiloparsecs out from the centre, the metallicity of stars is only 10% that of the Sun (which is about 8 kiloparsecs – or around 25,000 light years – out from the centre).
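Put in the logarithmic units astronomers usually quote, that drop is easy to translate into a gradient. The quick calculation below assumes the decline is roughly linear in the logarithm of metallicity, which is an illustrative simplification rather than a result from the study.

```python
import math

# Quick gradient estimate: metallicity is usually quoted in dex, i.e. the
# log10 ratio relative to the Sun. Going from solar (0 dex) at ~8 kpc to 10%
# of solar (-1 dex) at ~18 kpc gives roughly -0.1 dex per kpc, assuming the
# decline is linear in the logarithm (an illustrative simplification).
r_sun, z_sun = 8.0, 0.0                 # kpc, dex
r_out, z_out = 18.0, math.log10(0.10)   # kpc, dex (-1.0)

gradient = (z_out - z_sun) / (r_out - r_sun)
print(f"Approximate metallicity gradient: {gradient:.2f} dex/kpc")  # -0.10
```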

This study compared young star clusters within stellar nurseries with relatively high metallicity (like the Orion nebula) against more distant clusters in the outer galaxy within low metallicity nurseries (like Digel Cloud 2).

The study’s conclusions are based on the assumption that the radiation output of stars with dense protoplanetary disks will have an excess at near- and mid-infrared wavelengths. This is largely because the star heats its surrounding protoplanetary disk, making the disk radiate in the infrared.

The research team used the 8.2 metre Subaru Telescope and a procedure called JHK photometry to identify a measure they called ‘disk fraction’, representing the proportion of stars retaining a protoplanetary disk (as determined by the excess of infrared radiation). They also used an established mass-luminosity relation to determine the age of the clusters.
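Conceptually, the disk fraction is just a head count: the share of cluster members whose near-infrared colours show an excess over a bare photosphere. The sketch below uses made-up photometry and a made-up excess threshold; the real measurement relies on calibrated JHK colour-colour diagrams.

```python
# Schematic 'disk fraction': the share of cluster members whose near-infrared
# colour exceeds what a bare (disk-less) photosphere plus reddening would give.
# The photometry and the excess threshold below are invented for illustration.
members = [
    {"H": 12.1, "K": 11.5},   # H - K = 0.6  -> excess (disk candidate)
    {"H": 13.0, "K": 12.8},   # H - K = 0.2  -> no excess
    {"H": 12.7, "K": 12.0},   # H - K = 0.7  -> excess (disk candidate)
    {"H": 14.2, "K": 14.0},   # H - K = 0.2  -> no excess
]
EXCESS_THRESHOLD = 0.4  # assumed (H - K) colour above which we count a disk

with_disk = sum(1 for s in members if s["H"] - s["K"] > EXCESS_THRESHOLD)
disk_fraction = with_disk / len(members)
print(f"Disk fraction: {disk_fraction:.0%}")   # 50% in this toy sample
```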

Graphing disk fraction against age for populations of Sun-equivalent metallicity stars versus populations of low metallicity stars in the outer galaxy suggests that the protoplanetary disks of those low metallicity stars disperse much more quickly.

Left image - The Subaru Telescope in Hawaii. Credit: NAOJ. Right image - the relationship between disk persistence for low metallicity stars (O/H = -0.7, red line) and stars with Sun-equivalent metallicity (O/H = 0, black line). The protoplanetary disks of low metal stars seem to disperse quickly, reducing the likelihood of planet formation. Credit: Yasui et al.

The authors suggest that the process of photoevaporation may underlie the shorter lifespan of low metal disks – where the impact of photons is sufficient to quickly disperse low atomic mass hydrogen and helium, while the presence of higher atomic weight metals may deflect those photons and hence sustain a protoplanetary disk over a longer period.

As the authors point out, the shorter lifetime of low metallicity disks reduces the likelihood of planet formation. Although the authors steer clear of much more speculation, the implication of this relationship seems to be that, as well as expecting to find fewer planets around stars towards the outer edge of the galaxy, we might also expect to find fewer planets around any old Population II stars, which would have also formed in environments of low metallicity.

Indeed, these findings suggest that planets, even gas giants, may have been exceedingly rare in the early universe – and have only become commonplace later in the universe’s evolution – after stellar nucleosynthesis processes had adequately seeded the cosmos with metals.

Further reading: Yasui, C., Kobayashi, N., Tokunaga, A., Saito, M. and Tokoku, C.
Short Lifetime of Protoplanetary Disks in Low-Metallicity Environments

Probing Exoplanets

Sometimes topics segue perfectly. With the recent buzz about habitable planets, followed by the rain-on-the-parade articles we’ve had about the not-insignificant errors in the detections of planets around Gliese 581 as well as in finding molecules in exoplanet atmospheres, it’s not been the best of times for finding life. But in a comment on my last article, Lawrence Crowell noted: “You can’t really know for sure whether a planet has life until you actually go there and look on the ground. This is not at all easy, and probably it is at best possible to send a probe within a 25 to 50 light year radius.”

This is right on the mark and happens to be another topic that’s been under some discussion on arXiv recently, in a short series of papers and responses. The first paper, accepted to the journal Astrobiology and led by Jean Schneider of the Observatory of Paris-Meudon, seeks to describe “the far future of exoplanet direct characterization.” In general, this paper discusses where the study of exoplanets could go from our current knowledge base. It proposes two main directions: finding more planets to better survey the parameter space planets inhabit, or more in-depth, long-term study of the planets we do know.

But perhaps the more interesting aspect of the paper, and the one that has generated a rare response, is what can be done should we detect a planet with promising characteristics relatively nearby. The authors first consider trying to directly image the planet’s surface and calculate that the diameter of a telescope capable of doing so would be roughly half that of the sun. Instead, if we truly wish to get a direct image, the best bet would be to go there. They quickly address a few of the potential challenges.

The first is that of cosmic rays. These high energy particles can wreak havoc on electronics. The second is simple dust grains. The team calculates that an impact with “a 100 micron interstellar grain at 0.3 the speed of light has the same kinetic energy than a 100 ton body at 100 km/hour”. With present technology, any spacecraft equipped with sufficient shielding would be prohibitively massive and difficult to accelerate to the velocities necessary to make the trip worthwhile.

But Ian Crawford, of the University of London, thinks that the risk posed by such grains may be overstated. Firstly, Crawford believes Schneider’s requirement of 30% of the speed of light is somewhat overzealous. Instead, most proposals for interstellar travel by probes adopt a value closer to 10% of the speed of light. In particular, the most exhaustive proposal yet created (the Daedalus project) only aimed for a velocity of 0.12c. However, the ability to produce such a craft was well beyond the means of the time. But with the advent of miniaturization of many electronic components, the prospect may need to be reevaluated.

Aside from the overestimate on necessary velocities, Crawford suggests that Schneider’s team overstated the size of dust grains. In the solar neighborhood, dust grains are estimated to be nearly 100 times smaller than reported by Schneider’s team. The combination of the change in size estimation and that of velocity takes the energy released on collision from a whopping 4 × 10⁷ joules to a mere 4.5 joules. At absolute largest, recent studies have shown that the upper limit for dust particles is more in the range of 4.5 micrometers.
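Both sets of numbers follow from the ordinary kinetic energy formula, E = ½mv² (relativistic corrections at these speeds amount to only a few percent). The sketch below roughly reproduces them, assuming the quoted 100 microns refers to the grain’s radius and taking a silicate-like density of 2.5 g/cm³; both are my assumptions, not values stated in the papers.

```python
import math

C = 2.998e8      # speed of light, m/s
RHO = 2500.0     # assumed grain density, kg/m^3 (silicate-like)

def grain_kinetic_energy(radius_m, speed_fraction_c):
    """Classical kinetic energy, E = 0.5*m*v^2, of a spherical dust grain."""
    mass = RHO * (4.0 / 3.0) * math.pi * radius_m ** 3
    v = speed_fraction_c * C
    return 0.5 * mass * v ** 2

# Schneider et al.'s scenario: ~100-micron grain hit at 0.3c.
print(f"{grain_kinetic_energy(100e-6, 0.3):.1e} J")   # ~4e7 J
# Crawford's scenario: a grain ~100x smaller, probe at 0.1c.
print(f"{grain_kinetic_energy(1e-6, 0.1):.1f} J")     # ~4.7 J
```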

Lastly, Crawford suggests that there may be alternative ways to provide shielding beyond a brute-force wall of mass. If a spacecraft were able to detect incoming particles using radar or another technique, it could possibly destroy them with lasers or deflect them with an electromagnetic field.

But Schneider wasn’t finished. He issued a response to Crawford’s response. In it, he criticizes Crawford’s optimistic vision of using nuclear or anti-matter propulsion systems. He notes that, thus far, nuclear propulsion has only been able to produce short impulses instead of continuous thrust and that, although some electronics have been miniaturized, the best analogue yet developed, the National Ignition Facility, “with all its control and cooling systems, is presently quite a non-miniaturized building.”

Anti-matter propulsion may be even more difficult. Currently, our ability to produce anti-matter is severely limited. Schneider estimates that it would take 200 terawatts of power to produce the required amounts. Meanwhile, the total power consumption of the entire Earth is only about 20 terawatts.

In response to the charge of overestimation, Schneider notes that, although such large dust grains would be rare, “even two lethal or severe collisions are prohibitory.” He does not, however, go on to make any estimate of what the actual probability of such a collision would be.

Ultimately, Schneider concludes that all discussion is, at best, extremely preliminary. Before any such undertaking would be seriously considered, it would require “a precursor mission to secure the technological concept, including shielding mechanisms, at say 500 to 1000 Astronomical Units.” Ultimately, Schneider and his team seem to remind us that the technology is not yet there and that there are legitimate threats we must address. Crawford, on the other hand, suggests that some of these challenges are ones we may already be well on the road to addressing and constraining.