The Solar System Probably Has Thousands of Captured Interstellar Asteroids

Artist’s impression of the first interstellar asteroid/comet, ‘Oumuamua. This unique object was discovered on 19 October 2017 by the Pan-STARRS 1 telescope in Hawaii. Credit: ESO/M. Kornmesser

On October 19th, 2017, the Panoramic Survey Telescope and Rapid Response System-1 (Pan-STARRS-1) in Hawaii announced the first-ever detection of an interstellar asteroid, named 1I/2017 U1 (aka. ‘Oumuamua). Originally thought to be a comet, this interstellar visitor quickly became the focus of follow-up studies that sought to determine its origin, structure, and composition, and to rule out the possibility that it was an alien spacecraft!

While ‘Oumuamua is the first known example of an interstellar asteroid reaching our Solar System, scientists have long suspected that such visitors are a regular occurrence. Aiming to determine just how common such visitors are, a team of researchers from Harvard University conducted a study to measure the capture rate of interstellar asteroids and comets, and what role they may play in the spread of life throughout the Universe.

The study, titled “Implications of Captured Interstellar Objects for Panspermia and Extraterrestrial Life”, recently appeared online and is being considered for publication in The Astrophysical Journal. The study was conducted by Manasvi Lingam, a postdoc at the Harvard Institute for Theory and Computation (ITC), and Abraham Loeb, the chairman of the ITC and a researcher at the Harvard-Smithsonian Center for Astrophysics (CfA).

For the sake of their study, Lingam and Loeb constructed a three-body gravitational model, in which the physics of three mutually interacting bodies is used to compute their respective trajectories. In Lingam and Loeb’s model, Jupiter and the Sun served as the two massive bodies while a far less massive interstellar object served as the third. As Dr. Loeb explained to Universe Today via email:

“The combined gravity of the Sun and Jupiter acts as a ‘fishing net’. We suggest a new approach to searching for life, which is to examine the interstellar objects captured by this fishing net instead of the traditional approach of looking through telescope or traveling with spacecrafts to distant environments to do the same.”

Using this model, the pair then began calculating the rate at which objects comparable in size to ‘Oumuamua would be captured by the Solar System, and how often such objects would collide with the Earth over the course of its entire history. They also considered the Alpha Centauri system as a separate case for the sake of comparison. In this binary system, Alpha Centauri A and B serve as the two massive bodies and an interstellar asteroid as the third.
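For readers who want a feel for how such a model works, here is a minimal sketch in Python of a "restricted" three-body setup like the one described above: the Sun is held at the origin, Jupiter moves on a circular orbit, and a massless interstellar visitor is integrated through the system to see whether it ends up gravitationally bound. This is an illustration, not the authors' actual code, and the visitor's starting position and velocity are assumed values.

```python
# A minimal sketch of a restricted three-body setup (Sun + Jupiter + massless
# visitor). This is an illustration, not the authors' actual model; the
# visitor's starting position and velocity below are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

G = 4 * np.pi**2             # gravitational constant in AU^3 / (Msun yr^2)
M_SUN, M_JUP = 1.0, 9.55e-4  # masses in solar masses
A_JUP = 5.2                  # Jupiter's (circular) orbital radius in AU
OMEGA = np.sqrt(G * (M_SUN + M_JUP) / A_JUP**3)   # Jupiter's angular rate

def jupiter_pos(t):
    """Jupiter on a circular orbit, with the Sun held fixed at the origin."""
    return A_JUP * np.array([np.cos(OMEGA * t), np.sin(OMEGA * t)])

def derivatives(t, state):
    """Velocity and acceleration of the massless visitor due to Sun and Jupiter."""
    r, v = state[:2], state[2:]
    rj = jupiter_pos(t)
    acc = (-G * M_SUN * r / np.linalg.norm(r)**3
           - G * M_JUP * (r - rj) / np.linalg.norm(r - rj)**3)
    return np.concatenate([v, acc])

# An assumed hyperbolic (unbound) visitor: 60 AU out, drifting inward slightly
# faster than the local escape speed.
state0 = [60.0, 10.0, -1.2, 0.0]          # x, y in AU; vx, vy in AU/yr
sol = solve_ivp(derivatives, (0.0, 500.0), state0, rtol=1e-9, atol=1e-9)

r_f, v_f = sol.y[:2, -1], sol.y[2:, -1]
energy = 0.5 * v_f @ v_f - G * M_SUN / np.linalg.norm(r_f)   # specific orbital energy
print("bound to the Sun after the encounter?", energy < 0)
```

Running the integration for different incoming trajectories is, in effect, what "casting the fishing net" means in practice: most hyperbolic visitors pass straight through, but a small fraction lose enough energy in a Jupiter encounter to end up on bound orbits.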

As Dr. Lingam indicated:

“The frequency of these objects is determined from the number density of such objects, which has been recently updated based on the discovery of ‘Oumuamua. The size distribution of these objects is unknown (and serves as a free parameter in our model), but for the sake of obtaining quantitative results, we assumed that it was similar to that of comets within our Solar System.”

The theory of Lithopanspermia states that life can be shared between planets within a planetary system. Credit: NASA

In the end, they determined that a few thousand captured objects might be found within the Solar System at any given time – the largest of which would be tens of km in radius. For the Alpha Centauri system, the results were even more interesting. Based on the likely rate of capture, and the maximum size of a captured object, they determined that even Earth-sized objects could have been captured in the course of the system’s history.

In other words, Alpha Centauri may have picked up some rogue planets over time, which would have had a drastic impact on the evolution of the system. In this vein, the authors also explored how objects like ‘Oumuamua could have played a role in the distribution of life throughout the Universe via rocky bodies. This is a variation on the theory of lithopanspermia, in which microbial life is shared between planets thanks to asteroids, comets and meteors.

In this scenario, interstellar asteroids, which originate in distant star systems, would be the carriers of microbial life from one system to another. If such asteroids collided with Earth in the past, they could be responsible for seeding our planet and leading to the emergence of life as we know it. As Lingam explained:

“These interstellar objects could either crash directly into a planet and thus seed it with life, or be captured into the planetary system and undergo further collisions within that system to yield interplanetary panspermia (the second scenario is more likely when the captured object is large, for e.g. a fraction of the Earth’s radius).”

In addition, Lingam and Loeb offered suggestions on how future visitors to our Solar System could be studied. As Lingam summarized, the key would be to look for specific kinds of spectra from objects in our Solar System:

“It may be possible to look for interstellar objects (captured/unbound) in our Solar system by looking at their trajectories in detail. Alternatively, since many objects within the Solar system have similar ratios of oxygen isotopes, finding objects with very different isotopic ratios could indicate their interstellar origin. The isotope ratios can be determined through high-resolution spectroscopy if and when interstellar comets approach close to the Sun.”

“The simplest way to single out the objects who originated outside the Solar System, is to examine the abundance ratio of oxygen isotopes in the water vapor that makes their cometary tails,” added Loeb. “This can be done through high resolution spectroscopy. After identifying a trapped interstellar object, we could launch a probe that will search on its surface for signatures of primitive life or artifacts of a technological civilization.”

It would be no exaggeration to say that the discovery of ‘Oumuamua has set off something of a revolution in astronomy. In addition to validating something astronomers have long suspected, it has also provided new opportunities for research and the testing of scientific theories (such as lithopanspermia).

In the future, with any luck, robotic missions will be dispatched to these bodies to conduct direct studies and maybe even sample return missions. What these reveal about our Universe, and maybe even the spread of life throughout it, is sure to be very illuminating!

Further Reading: arXiv

Mysterious Filament is Stretching Down Towards the Milky Way’s Supermassive Black Hole

A radio image from the NSF’s Karl G. Jansky Very Large Array showing the center of our galaxy. The mysterious radio filament is the curved line located near the center of the image, & the supermassive black hole Sagittarius A* (Sgr A*), is shown by the bright source near the bottom of the image. Credit: NSF/VLA/UCLA/M. Morris et al.

The core of the Milky Way Galaxy has always been a source of mystery and fascination to astronomers. This is due in part to the fact that our Solar System is embedded within the disk of the Milky Way – the flattened region that extends outwards from the core. This has made seeing into the bulge at the center of our galaxy rather difficult. Nevertheless, what we’ve been able to learn over the years has proven to be immensely interesting.

For instance, in the 1970s, astronomers became aware of the Supermassive Black Hole (SMBH) at the center of our galaxy, known as Sagittarius A* (Sgr A*). In 2016, astronomers also noticed a curved filament that appeared to be extending from Sgr A*. Using a pioneering technique, a team of astronomers from the Harvard-Smithsonian Center for Astrophysics (CfA) recently produced the highest-quality images of this structure to date.

The study that details their findings, titled “A Nonthermal Radio Filament Connected to the Galactic Black Hole?”, recently appeared in The Astrophysical Journal Letters. In it, the team describes how they used the National Radio Astronomy Observatory’s (NRAO) Very Large Array to investigate the non-thermal radio filament (NTF) near Sagittarius A* – now known as the Sgr A West Filament (SgrAWF).

Detection of an unusually bright X-Ray flare from Sagittarius A*, a supermassive black hole in the center of the Milky Way galaxy. Credit: NASA/CXC/Stanford/I. Zhuravleva et al.

As Mark Morris – a professor of astronomy at UCLA and the lead author of the study – explained in a CfA press release:

“With our improved image, we can now follow this filament much closer to the Galaxy’s central black hole, and it is now close enough to indicate to us that it must originate there. However, we still have more work to do to find out what the true nature of this filament is.”

After examining the filament, the research team came up with three possible explanations for its existence. The first is that the filament is the result of inflowing gas, which would produce a rotating, vertical tower of magnetic field as it approaches and threads Sgr A*’s event horizon. Within this tower, particles would produce radio emissions as they are accelerated and spiral in around magnetic field lines extending from the black hole.

The second possibility is that the filament is a theoretical object known as a cosmic string. These are basically long, extremely thin cosmic structures that carry mass and electric currents, and which are hypothesized to migrate from the centers of galaxies. In this case, the string could have been captured by Sgr A* once it came too close, with a portion crossing its event horizon.

The third and final possibility is that there is no real association between the filament and Sgr A*, and that its position and direction are merely coincidental. This would imply that there are many such filaments in the Universe and this one just happened to be found near the center of our galaxy. However, the team is confident that such a coincidence is highly unlikely.

Labelled image of the center of our galaxy, showing the mysterious radio filament & the supermassive black hole Sagittarius A* (Sgr A*). Credit: NSF/VLA/UCLA/M. Morris et al.

As Jun-Hui Zhao of the Harvard-Smithsonian Center for Astrophysics in Cambridge, and a co-author on the paper, said:

“Part of the thrill of science is stumbling across a mystery that is not easy to solve. While we don’t have the answer yet, the path to finding it is fascinating. This result is motivating astronomers to build next generation radio telescopes with cutting edge technology.”

All of these scenarios are currently being investigated, and each poses its own share of implications. If the first possibility is true – in which the filament is caused by particles being ejected by Sgr A* – then astronomers would be able to glean vital information about how magnetic fields operate in such an environment. In short, it could show that near an SMBH, magnetic fields are orderly rather than chaotic.

This could be proven by examining particles farther away from Sgr A* to see if they are less energetic than those that are closer to it. The second possibility, the cosmic string theory, could be tested by conducting follow-up observations with the VLA to determine if the position of the filament is shifting and its particles are moving at a fraction of the speed of light.

If the latter should prove to be the case, it would constitute the first evidence that theoretical cosmic strings actually exist. It would also allow astronomers to conduct further tests of General Relativity, examining how gravity works under such conditions and how space-time is affected. The team also noted that, even if the filament is not physically connected to Sgr A*, the bend in the filament is still rather telling.

In short, the bend appears to coincide with a shock wave, the kind that would be caused by an exploding star. This could mean that one of the massive stars that surround Sgr A* exploded in proximity to the filament in the past, producing the necessary shock wave that altered the course of the inflowing gas and its magnetic field. All of these mysteries will be the subject of follow-up surveys conducted with the VLA.

As Miller Goss from the National Radio Astronomy Observatory in New Mexico (a co-author on the study) said, “We will keep hunting until we have a solid explanation for this object. And we are aiming to next produce even better, more revealing images.”

Further Reading: CfA, ApJL

There Could Be Hundreds More Icy Worlds with Life Than Rocky Planets Out There in the Galaxy

The moons Europa and Enceladus, as imaged by the Galileo and Cassini spacecraft. Credit: NASA/ESA/JPL-Caltech/SETI Institute

In the hunt for extra-terrestrial life, scientists tend to take what is known as the “low-hanging fruit approach”. This consists of looking for conditions similar to what we experience here on Earth, which include oxygen, organic molecules, and plenty of liquid water. Interestingly enough, some of the places where these ingredients are present in abundance include the interiors of icy moons like Europa, Ganymede, Enceladus and Titan.

Whereas there is only one terrestrial planet in our Solar System that is capable of supporting life (Earth), there are multiple “Ocean Worlds” like these moons. Taking this a step further, a team of researchers from the Harvard-Smithsonian Center for Astrophysics (CfA) conducted a study showing that potentially-habitable icy moons with interior oceans are likely far more common in the Universe than habitable terrestrial planets.

The study, titled “Subsurface Exolife”, was performed by Manasvi Lingam and Abraham Loeb of the Harvard-Smithsonian Center for Astrophysics (CfA) and the Institute for Theory and Computation (ITC) at Harvard University. For the sake of their study, the authors consider what defines a circumstellar habitable zone (aka. “Goldilocks Zone”) and the likelihood of life existing inside moons with interior oceans.

Cutaway showing the interior of Saturn’s moon Enceladus. Credit: ESA

To begin, Lingam and Loeb address the tendency to confuse habitable zones (HZs) with habitability, or to treat the two concepts as interchangeable. For instance, planets that are located within an HZ are not necessarily capable of supporting life – in this respect, Mars and Venus are perfect examples. Whereas Mars is too cold and its atmosphere too thin to support life, Venus suffered a runaway greenhouse effect that caused it to become a hot, hellish place.

On the other hand, bodies that are located beyond HZs have been found to be capable of having liquid water and the necessary ingredients to give rise to life. In this case, moons like Europa, Ganymede, Enceladus, Dione, Titan, and several others serve as perfect examples. Thanks to the prevalence of water and geothermal heating caused by tidal forces, these moons all have interior oceans that could very well support life.

As Lingam, a post-doctoral researcher at the ITC and CfA and the lead author on the study, told Universe Today via email:

“The conventional notion of planetary habitability is the habitable zone (HZ), namely the concept that the “planet” must be situated at the right distance from the star such that it may be capable of having liquid water on its surface. However, this definition assumes that life is: (a) surface-based, (b) on a planet orbiting a star, and (c) based on liquid water (as the solvent) and carbon compounds. In contrast, our work relaxes assumptions (a) and (b), although we still retain (c).”

As such, Lingam and Loeb widen their consideration of habitability to include worlds that could have subsurface biospheres. Such environments go beyond icy moons such as Europa and Enceladus and could include many other types of deep subterranean environments. On top of that, it has also been speculated that life could exist in Titan’s methane lakes (i.e. methanogenic organisms). However, Lingam and Loeb chose to focus on icy moons instead.

A “true color” image of the surface of Jupiter’s moon Europa as seen by the Galileo spacecraft. Image credit: NASA/JPL-Caltech/SETI Institute

“Even though we consider life in subsurface oceans under ice/rock envelopes, life could also exist in hydrated rocks (i.e. with water) beneath the surface; the latter is sometimes referred to as subterranean life,” said Lingam. “We did not delve into the second possibility since many of the conclusions (but not all of them) for subsurface oceans are also applicable to these worlds. Similarly, as noted above, we do not consider lifeforms based on exotic chemistries and solvents, since it is not easy to predict their properties.”

Ultimately, Lingam and Loeb chose to focus on worlds that would orbit stars and likely contain subsurface life humanity would be capable of recognizing. They then went about assessing the likelihood that such bodies are habitable, what advantages and challenges life will have to deal with in these environments, and the likelihood of such worlds existing beyond our Solar System (compared to potentially-habitable terrestrial planets).

For starters, “Ocean Worlds” have several advantages when it comes to supporting life. Within the Jovian system (Jupiter and its moons), radiation is a major problem, which is the result of charged particles becoming trapped in the gas giant’s powerful magnetic field. Between that and the moons’ tenuous atmospheres, life would have a very hard time surviving on the surface, but life dwelling beneath the ice would fare far better.

“One major advantage that icy worlds have is that the subsurface oceans are mostly sealed off from the surface,” said Lingam. “Hence, UV radiation and cosmic rays (energetic particles), which are typically detrimental to surface-based life in high doses, are unlikely to affect putative life in these subsurface oceans.”

Artist rendering showing an interior cross-section of the crust of Enceladus, which shows how hydrothermal activity may be causing the plumes of water at the moon’s surface. Credits: NASA-GSFC/SVS, NASA/JPL-Caltech/Southwest Research Institute

“On the negative side,” he continued, “the absence of sunlight as a plentiful energy source could lead to a biosphere that has far less organisms (per unit volume) than Earth. In addition, most organisms in these biospheres are likely to be microbial, and the probability of complex life evolving may be low compared to Earth. Another issue is the potential availability of nutrients (e.g. phosphorus) necessary for life; we suggest that these nutrients might be available only in lower concentrations than Earth on these worlds.”

In the end, Lingam and Loeb determined that worlds with ice shells of moderate thickness may exist in a wide range of habitats throughout the cosmos. Based on how statistically likely such worlds are, they concluded that “Ocean Worlds” like Europa, Enceladus, and others like them are about 1000 times more common than rocky planets that exist within the HZs of stars.

These findings have some drastic implications for the search for extra-terrestrial and extra-solar life, as well as for how life may be distributed through the Universe. As Lingam summarized:

“We conclude that life on these worlds will undoubtedly face noteworthy challenges. However, on the other hand, there is no definitive factor that prevents life (especially microbial life) from evolving on these planets and moons. In terms of panspermia, we considered the possibility that a free-floating planet containing subsurface exolife could be temporarily “captured” by a star, and that it may perhaps seed other planets (orbiting that star) with life. As there are many variables involved, not all of them can be quantified accurately.”

A new instrument called the Search for Extra-Terrestrial Genomes (SETG) is being developed to find evidence of life on other worlds. Credit: NASA/Jenny Mottor

Professor Loeb – the Frank B. Baird Jr. Professor of Science at Harvard University, the director of the ITC, and the study’s co-author – added that finding examples of this life presents its own share of challenges. As he told Universe Today via email:

“It is very difficult to detect sub-surface life remotely (from a large distance) using telescopes. One could search for excess heat but that can result from natural sources, such as volcanos. The most reliable way to find sub-surface life is to land on such a planet or moon and drill through the surface ice sheet. This is the approach contemplated for a future NASA mission to Europa in the solar system.”

Exploring the implications for panspermia further, Lingam and Loeb also considered what might happen if a planet like Earth were ever ejected from the Solar System. As they note in their study, previous research has indicated how planets with thick atmospheres or subsurface oceans could still support life while floating in interstellar space. As Loeb explained, they even considered what this scenario would mean for Earth itself:

“An interesting question is what would happen to the Earth if it was ejected from the solar system into cold space without being warmed by the Sun. We have found that the oceans would freeze down to a depth of 4.4 kilometers but pockets of liquid water would survive in the deepest regions of the Earth’s ocean, such as the Mariana Trench, and life could survive in these remaining sub-surface lakes. This implies that sub-surface life could be transferred between planetary systems.”

The Drake Equation, a mathematical formula for the probability of finding life or advanced civilizations in the universe. Credit: University of Rochester

This study also serves as a reminder that as humanity explores more of the Solar System (largely for the sake of finding extra-terrestrial life), what we find also has implications in the hunt for life in the rest of the Universe. This is one of the benefits of the “low-hanging fruit” approach. What we don’t know is informed by what we do, and what we find helps inform our expectations of what else we might find.

And of course, it’s a very vast Universe out there. What we may find is likely to go far beyond what we are currently capable of recognizing!

Further Reading: arXiv

Astronomers Start Mapping the Structure of the Far Side of the Milky Way

Artist's impression of the spiral structure of the Milky Way with two major stellar arms and a bar. Credit: NASA/JPL-Caltech/ESO/R. Hurt

Since the 18th century, astronomers have been aware that our Solar System is embedded in a vast disk of stars and gas known as the Milky Way Galaxy. Since that time, the greatest scientific minds have been attempting to obtain accurate distance measurements in order to determine just how large the Milky Way is. This has been no easy task, since being embedded within the galaxy’s disk means that we cannot view it from the outside.

But thanks to a time-tested technique called trigonometric parallax, a team of astronomers from the Max Planck Institute for Radio Astronomy (MPIfR) in Bonn, Germany, and the Harvard-Smithsonian Center for Astrophysics (CfA) were recently able to directly measure the distance to the opposite side of the Milky Way Galaxy. Aside from being an historic first, this feat has nearly doubled the previous record for distance measurements within our galaxy.

The study describing this accomplishment, titled “Mapping Spiral Structure on the far side of the Milky Way”, recently appeared in the journal Science. Led by Alberto Sanna, a researcher from the Max Planck Institute for Radio Astronomy, the team consulted data from the National Radio Astronomy Observatory’s Very Long Baseline Array (VLBA) to determine the distance to a star-forming region on the other side of our galaxy.

Artist’s view of the Milky Way with the location of the Sun and the star forming region at the opposite side in the Scutum-Centaurus spiral arm. Credit: Bill Saxton, NRAO/AUI/NSF; Robert Hurt, NASA.

To do this, the team relied on a technique first applied by Friedrich Wilhelm Bessel in 1838 to measure the distance to the star 61 Cygni. Known as trigonometric parallax, this technique involves viewing an object from opposite sides of the Earth’s orbit around the Sun, and then measuring the angle of the object’s apparent shift in position. In this way, astronomers are able to use simple trigonometry to calculate the distance to that object.
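To put rough numbers on it: for the tiny angles involved, the distance in parsecs is simply the reciprocal of the parallax angle in arcseconds. The short Python sketch below illustrates this with approximate textbook figures (our own illustrative values, not numbers taken from the study) for 61 Cygni and for a hypothetical source tens of thousands of light-years away.

```python
# Trigonometric parallax in one line: d [parsecs] = 1 / p [arcseconds].
# The parallax values below are rough illustrative figures, not measurements
# quoted in this article.
PC_TO_LY = 3.2616   # light-years per parsec

def distance_ly(parallax_arcsec):
    """Distance in light-years from a parallax angle in arcseconds."""
    return (1.0 / parallax_arcsec) * PC_TO_LY

print(f"61 Cygni, p ~ 0.29 arcsec: ~{distance_ly(0.29):.0f} light-years")
print(f"p ~ 50 microarcseconds:    ~{distance_ly(50e-6):,.0f} light-years")
```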

In short, the smaller the measured angle, the greater the distance to the object. These measurements were performed using data from the Bar and Spiral Structure Legacy (BeSSeL) Survey, which was named in honor of Friedrich Wilhelm Bessel. But whereas Bessel and his contemporaries were forced to measure parallax using basic instruments, the VLBA has ten dish antennas distributed across North America, Hawaii, and the Caribbean.

With such an array at its disposal, the VLBA is capable of measuring parallaxes with one thousand times the accuracy of those performed by astronomers in Bessel’s time. And rather than being confined to nearby star systems, the VLBA can measure the minuscule angles associated with vast galactic distances. As Sanna explained in a recent MPIfR press release:

“Using the VLBA, we now can accurately map the whole extent of our Galaxy.”

Using the parallax technique, astronomers observe an object from opposite ends of Earth’s orbit around the Sun to precisely measure its distance. Credit: Alexandra Angelich, NRAO/AUI/NSF.

The VLBA observations, which were conducted in 2014 and 2015, measured the distance to the star-forming region known as G007.47+00.05. Like all star-forming regions, this one contains molecules of water and methanol, which act as natural amplifiers of radio signals. This results in masers (the radio-wave equivalent of lasers), an effect that makes the radio signals appear bright and readily observable with radio telescopes.

This particular region is located over 66,000 light-years from Earth, on the opposite side of the Milky Way relative to our Solar System. The previous record for a parallax measurement was about 36,000 light-years, roughly 11,000 light-years farther than the distance between our Solar System and the center of our galaxy. As Sanna explained, this accomplishment in radio astronomy will enable surveys that reach much farther than previous ones:

“Most of the stars and gas in our Galaxy are within this newly-measured distance from the Sun. With the VLBA, we now have the capability to measure enough distances to accurately trace the Galaxy’s spiral arms and learn their true shapes.”

Hundreds of star-forming regions exist within the Milky Way. But as Karl Menten – a member of the MPIfR and a co-author on the study – explained, this particular region was significant because of where it is located. “So we have plenty of ‘mileposts’ to use for our mapping project,” he said. “But this one is special: Looking all the way through the Milky Way, past its center, way out into the other side.”

The band of light (the Milky Way) that is visible in the night sky, showing the stellar disk of our galaxy. Credit: Bob King

In the coming years, Sanna and his colleagues hope to conduct additional observations of G007.47+00.05 and other distant star-forming regions of the Milky Way. Ultimately, the goal is to gain a complete understanding of our galaxy, one that is so accurate that scientists will be able to finally place precise constraints on its size, mass, and its total number of stars.

With the necessary tools now in hand, Sanna and his team even estimate that a complete picture of the Milky Way could be available in about ten years’ time. Imagine that! Future generations will be able to study the Milky Way with the same ease as one that is located nearby, and which they can view edge-on. At long last, all those artist’s impressions of our Milky Way will be to scale!

Further Reading: MPIfR, Science

New Study Proposes a Giant, Space-Based Solar Flare Shield for Earth

A massive prominence erupts from the surface of the sun. Credit: NASA Goddard Space Flight Center

In today’s modern, fast-paced world, human activity is very much reliant on electrical infrastructure. If the power grids go down, our climate control systems will shut off, our computers will die, and all electronic forms of commerce and communication will cease. But in addition to that, human activity in the 21st century is also becoming increasingly dependent upon the infrastructure located in Low Earth Orbit (LEO).

Aside from the many telecommunications satellites that are currently in space, there’s also the International Space Station and a fleet of GPS satellites. It is for this reason that solar flare activity is considered a serious hazard, and mitigation of it a priority. Looking to address that, a team of scientists from Harvard University recently released a study that proposes a bold solution – placing a giant magnetic shield in orbit.

The study – which was the work of Dr. Manasvi Lingam and Professor Abraham Loeb from the Harvard-Smithsonian Center for Astrophysics (CfA) – recently appeared online under the title “Impact and Mitigation Strategy for Future Solar Flares”. As they explain, solar flares pose a particularly grave risk in today’s world, and will become an even greater threat due to humanity’s growing presence in LEO.

Solar flares have been a recognized threat for over 150 years, ever since the famous Carrington Event of 1859. Since that time, a great deal of effort has been dedicated to the study of solar flares from both a theoretical and observational standpoint. And thanks to the advances that have since been made in astronomy and space exploration, much has been learned about the phenomena known as “space weather”.

At the same time, humanity’s increased reliance on electricity and space-based infrastructure has also made us more vulnerable to extreme space weather events. In fact, if the Carrington Event were to take place today, it is estimated that it would cause global damage to electric power grids, satellite communications, and global supply chains.

The cumulative worldwide economic losses, according to a 2009 report by the Space Studies Board (“Severe Space Weather Events–Understanding Societal and Economic Impacts”), would be $10 trillion, and recovery would take several years. And yet, as Professor Loeb explained to Universe Today via email, this threat from space has received far less attention than other possible threats.

“In terms of risk from the sky, most of the attention in the past was dedicated to asteroids,” said Loeb. “They killed the dinosaurs and their physical impact in the past was the same as it will be in the future, unless their orbits are deflected. However, solar flares have little biological impact and their main impact is on technology. But a century ago, there was not much technological infrastructure around, and technology is growing exponentially. Therefore, the damage is highly asymmetric between the past and future.”

Artist’s concept of a large asteroid passing by the Earth-Moon system. Credit: A combination of ESO/NASA images courtesy of Jason Major/Lights in the Dark.

To address this, Lingam and Loeb developed a simple mathematical model to assess the economic losses caused by solar flare activity over time. This model considered the increasing risk of damage to technological infrastructure based on two factors. For one, they considered the fact that the maximum energy of the solar flares likely to occur increases with time, and then coupled this with the exponential growth of technology and GDP.

What they determined was that on longer time scales, the rare types of solar flares that are very powerful become much more likely. Coupled with humanity’s growing presence and dependence on spacecraft and satellites in LEO, this will add up to a dangerous conjunction somewhere down the road. Or as Loeb explained:

“We predict that within ~150 years, there will be an event that causes damage comparable to the current US GDP of ~20 trillion dollars, and the damage will increase exponentially at later times until technological development will saturate. Such a forecast was never attempted before. We also suggest a novel idea for how to reduce the damage from energetic particles by a magnetic shield. This was my idea and was not proposed before.”

To address this growing risk, Lingam and Loeb also considered the possibility of placing a magnetic shield between Earth and the Sun. This shield would be placed at the Earth-Sun Lagrange Point 1, where it would be able to deflect charged particles and create an artificial bowshock around Earth. In this sense, the shield would protect Earth in a way that is similar to what its magnetic field already does, but to greater effect.

Illustration of the proposed magnetic deflector placed at the Earth-Sun L1 Lagrange Point. Credit: Lingam and Loeb, 2017

Based on their assessment, Lingam and Loeb indicate that such a shield is technically feasible in terms of its basic physical parameters. They were also able to provide a rudimentary timeline for the construction of this shield, not to mention some rough cost assessments. As Loeb indicated, such a shield could be built before this century is over, and at a fraction of the cost of what would be incurred from solar flare damage.

“The engineering project associated with the magnetic shield that we propose could take a few decades to construct in space,” he said. “The cost for lifting the needed infrastructure to space (weighting 100,000 tons) will likely be of order 100 billions of dollars, much less than the expected damage over a century.”
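As a quick sanity check on those figures, the quoted mass and budget imply an assumed launch price of roughly a thousand dollars per kilogram:

```python
# Back-of-the-envelope check of the figures quoted above: ~100,000 tons lifted
# for ~$100 billion works out to roughly $1,000 per kilogram.
shield_mass_kg = 100_000 * 1_000     # 100,000 metric tons in kilograms
budget_usd = 100e9                   # ~$100 billion
print(f"implied launch cost: ~${budget_usd / shield_mass_kg:,.0f} per kg")
```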

Interestingly enough, the idea of using a magnetic shield to protect planets has been proposed before. For example, this type of shield was also the subject of a presentation at this year’s “Planetary Science Vision 2050 Workshop”, which was hosted by NASA’s Planetary Science Division (PSD). This shield was recommended as a means of enhancing Mars’ atmosphere and facilitating crewed missions to its surface in the future.

During the course of the presentation, titled “A Future Mars Environment for Science and Exploration”, Jim Green, the Director of NASA’s Planetary Science Division, discussed how a magnetic shield could protect Mars’ tenuous atmosphere from the solar wind. This would allow it to replenish over time, which would have the added benefit of warming Mars up and allowing liquid water to again flow on its surface. If this sounds similar to proposals for terraforming Mars, that’s because it is!

Artist’s impression of a flaring red dwarf star, orbited by an exoplanet. Credit: NASA, ESA, and G. Bacon (STScI)

Beyond Earth and the Solar System, the implications for this study are quite overwhelming. In recent years, many terrestrial planets have been found orbiting within nearby M-type (aka. red dwarf) star systems. Because of the way these planets orbit closely to their respective suns, and the variable and unstable nature of M-type stars, scientists have expressed doubts about whether or not these planets could actually be habitable.

In short, scientists have ventured that over the course of billions of years, rocky planets that orbit close to their suns, are tidally-locked with them, and are subject to regular solar flares would lose their atmospheres. In this respect, magnetic shields could be a possible solution to creating extra-solar colonies. Place a large shield in orbit at the L1 Lagrange point, and you never have to worry again about powerful magnetic storms ravaging the planet!

On top of that, this study offers a possible resolution to the Fermi Paradox. When looking for signs of Extra-Terrestrial Intelligence (ETI), it might make sense to monitor distant stars for signs of an orbiting magnetic shield. As Prof. Loeb explained, such structures may have already been detected around distant stars, and could explain some of the unusual observations astronomers have made:

“The imprint of a shield built by another civilization could involve the changes it induces in the brightness of the host star due to occultation (similar behavior to Tabby’s star)  if the structure is big enough. The situation could be similar to Dyson’s spheres, but instead of harvesting the energy of the star the purpose of the infrastructure is to protect a technological civilization on a planet from the flares of its host star.”

It is a foregone conclusion that as time and technology progress, humanity’s presence in (and reliance on) space will increase. As such, preparing for the most drastic space weather events the Solar System can throw at us just makes sense. And when it comes to the big questions like “are we alone in the Universe?”, it also makes sense to take our boldest concepts and proposals and consider how they might point the way towards extra-terrestrial intelligence.

Further Reading: arXiv

Determining the Mass of the Milky Way Using Hypervelocity Stars

An artist's conception of a hypervelocity star that has escaped the Milky Way. Credit: NASA

For centuries, astronomers have been looking beyond our Solar System to learn more about the Milky Way Galaxy. And yet, there are still many things about it that elude us, such as knowing its precise mass. Determining this is important to understanding the history of galaxy formation and the evolution of our Universe. As such, astronomers have attempted various techniques for measuring the true mass of the Milky Way.

So far, none of these methods have been particularly successful. However, a new study by a team of researchers from the Harvard-Smithsonian Center for Astrophysics proposed a new and interesting way to determine how much mass is in the Milky Way. By using hypervelocity stars (HVSs) that have been ejected from the center of the galaxy as a reference point, they claim that we can constrain the mass of our galaxy.

Their study, titled “Constraining Milky Way Mass with Hypervelocity Stars“, was recently published in the journal Astronomy and Astrophysics. The study was produced by Dr. Giacomo Fragione, an astrophysicist at the University of Rome, and Professor Abraham Loeb – the Frank B. Baird, Jr. Professor of Science, the Chair of the Astronomy Department, and the Director of the Institute for Theory and Computation at Harvard University.

Stars speeding through the Galaxy. Credit: ESA

To be clear, determining the mass of the Milky Way Galaxy is no simple task. On the one hand, observations are difficult because the Solar System lies deep within the disk of the galaxy itself. But at the same time, there’s also the mass of our galaxy’s dark matter halo, which is difficult to measure since it is not “luminous”, and therefore invisible to conventional methods of detection.

Current estimates of the galaxy’s total mass are based on the motions of tidal streamers of gas and globular clusters, which are both influenced by the gravitational mass of the galaxy. But so far, these measurements have produced mass estimates that range from one to several trillion solar masses. As Professor Loeb explained to Universe Today via email, precisely measuring the mass of the Milky Way is of great importance to astronomers:

“The Milky Way provides a laboratory for testing the standard cosmological model. This model predicts that the number of satellite galaxies of the Milky Way depends sensitively on its mass. When comparing the predictions to the census of known satellite galaxies, it is essential to know the Milky Way mass. Moreover, the total mass calibrates the amount of invisible (dark) matter and sets the depth of the gravitational potential well and implies how fast should stars move for them to escape to intergalactic space.”

For the sake of their study, Prof. Loeb and Dr. Fragione therefore chose to take a novel approach, which involved modeling the motions of HVSs to determine the mass of our galaxy. More than 20 HVSs have been discovered within our galaxy so far, which travel at speeds of up to 700 km/s (435 mi/s) and are located at distances of about 100 to 50,000 light-years from the galactic center.

Artist’s conception of a hypervelocity star heading out from a spiral galaxy (similar to the Milky Way) and moving into dark matter nearby. Credit: Ben Bromley, University of Utah

These stars are thought to have been ejected from the center of our galaxy thanks to the interactions of binary stars with the supermassive black hole (SMBH) at the center of our galaxy – aka. Sagittarius A*. While their exact cause is still the subject of debate, the orbits of HVSs can be calculated since they are completely determined by the gravitational field of the galaxy.

As they explain in their study, the researchers used the asymmetry in the radial velocity distribution of stars in the galactic halo to determine the galaxy’s gravitational potential. The velocity distribution of these halo stars depends on the escape speed set by that potential, provided that the time it takes for the HVSs to complete a single orbit is shorter than the lifetime of the halo stars.

From this, they were able to discriminate between different models for the Milky Way and the gravitational force it exerts. By adopting the nominal travel time of these observed HVSs – which they calculated to be about 330 million years, about the same as the average lifetime of halo stars – they were able to derive gravitational estimates for the Milky Way, which allowed for estimates of its overall mass.

“By calibrating the minimum speed of unbound stars, we find that the Milky Way mass is in the range of 1.2-1.9 trillions solar masses,” said Loeb. While still subject to a range, this latest estimate is a significant improvement over previous estimates. What’s more, these estimates are consistent with our current cosmological models that attempt to account for all visible matter in the Universe, as well as dark matter and dark energy – the Lambda-CDM model.
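To see why escape speeds translate into a mass, consider a drastically simplified version of the idea. The study itself fits full models of the Galactic potential; the sketch below uses a bare point-mass approximation, and the radius and escape speed in it are illustrative assumptions rather than values from the paper.

```python
# Point-mass sketch of the escape-speed argument: v_esc = sqrt(2 G M / r), so
# a measured escape speed at radius r implies an enclosed mass M.
# The r and v_esc values below are illustrative assumptions only.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m

r = 50 * KPC           # assumed galactocentric radius
v_esc = 500e3          # assumed escape speed in m/s

M_enclosed = v_esc**2 * r / (2 * G)
print(f"enclosed mass: ~{M_enclosed / M_SUN:.1e} solar masses")
```

With these assumed inputs the sketch returns an enclosed mass of order a trillion solar masses, the same ballpark as the range quoted above.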

Distribution of dark matter when the Universe was about 3 billion years old, obtained from a numerical simulation of galaxy formation. Credit: VIRGO Consortium/Alexandre Amblard/ESA

“The inferred Milky Way mass is in the range expected within the standard cosmological model,” said Loeb, “where the amount of dark matter is about five times larger than that of ordinary (luminous) matter.”

Based on this breakdown, it can be said that normal matter in our galaxy – i.e. stars, planets, dust and gas – accounts for between 240 and 380 billion Solar Masses. So not only does this latest study provide more precise mass constraints for our galaxy, it could also help us to determine exactly how many star systems are out there – current estimates say that the Milky Way has between 200 and 400 billion stars and 100 billion planets.

Beyond that, this study is also significant to the study of cosmic formation and evolution. By placing more precise estimates on our galaxy’s mass, ones which are consistent with the current breakdown of normal matter and dark matter, cosmologists will be able to construct more accurate accounts of how our Universe came to be. One step closer to understanding the Universe on the grandest of scales!

Further Reading: Harvard Smithsonian CfA, Astronomy and Astrophysics

New Study Says a Fast Radio Burst Happens Every Second in the Universe

An artist's impression of the cosmic web, the filamentary structure that fills the entire Universe. Credit: M. Weiss/CfA

When astronomers first noted the detection of a Fast Radio Burst (FRB) in 2007 (aka. the Lorimer Burst), they were both astounded and intrigued. This high-energy burst of radio pulses, which lasted only a few milliseconds, appeared to be coming from outside of our galaxy. Since that time, astronomers have found evidence of many FRBs in previously-recorded data, and are still speculating as to what causes them.

Thanks to subsequent discoveries and research, astronomers now know that FRBs are far more common than previously thought. In fact, according to a new study by a team of researchers from the Harvard-Smithsonian Center for Astrophysics (CfA), FRBs may occur once every second within the observable Universe. If true, FRBs could be a powerful tool for researching the origins and evolution of the cosmos.

The study, titled “A Fast Radio Burst Occurs Every Second throughout the Observable Universe“, recently appeared in The Astrophysical Journal Letters. The study was led by Anastasia Fialkov, a postdoc researcher and Fellow at the CfA’s Institute for Theory and Computation (ITC). She was joined by Professor Abraham Loeb, the director of the ITC and the Frank B. Baird, Jr. Professor of Science at Harvard.

As noted, FRBs have remained something of a mystery since they were first discovered. Not only do their causes remain unknown, but much about their true nature is still not understood. As Dr. Fialkov told Universe Today via email:

“FRBs (or fast radio bursts) are astrophysical signals of an undetermined nature. The observed bursts are short (or millisecond duration), bright pulses in the radio part of the electromagnetic spectrum (at GHz frequencies). Only 24 bursts have been observed so far and we still do not know for sure which physical processes trigger them. The most plausible explanation is that they are launched by rotating magnetized neutron stars. However, this theory is to be confirmed.”

For the sake of their study, Fialkov and Loeb relied on observations made by multiple telescopes of the repeating fast radio burst known as FRB 121102. This FRB was first observed in 2012 by researchers using the Arecibo radio telescope in Puerto Rico, and has since been confirmed to be coming from a galaxy located 3 billion light years away in the direction of the Auriga constellation.

Since it was discovered, additional bursts have been detected coming from its location, making FRB 121102 the only known example of a repeating FRB. This repetitive nature has also allowed astronomers to conduct more detailed studies of it than any other FRB. As Prof. Loeb told Universe Today via email, these and other reasons made it an ideal target for their study:

“FRB 121102 is the only FRB for which a host galaxy and a distance were identified. It is also the only repeating FRB source from which we detected hundreds of FRBs by now. The radio spectrum of its FRBs is centered on a characteristic frequency and not covering a very broad band. This has important implications for the detectability of such FRBs, because in order to find them the radio observatory needs to be tuned to their frequency.”

Image of the sky where the radio burst FRB 121102 was found, in the constellation Auriga. You can see its location with a green circle. At left is supernova remnant S147 and at right, a star formation area called IC 410. Credit: Rogelio Bernal Andreo (DeepSkyColors.com)

Based on what is known about FRB 121102, Fialkov and Loeb conducted a series of calculations that assumed that its behavior was representative of all FRBs. They then projected how many FRBs would exist across the entire sky and determined that within the observable Universe, an FRB would likely be taking place once every second. As Dr. Fialkov explained:

“Assuming that FRBs are produced by galaxies of a particular type (e.g., similar to FRB 121102) we can calculate how many FRBs have to be produced by each galaxy to explain the existing observations (i.e., 2000 per sky per day). With this number in mind we can infer the production rate for the entire population of galaxies. This calculation shows that an FRB occurs every second when accounting for all the faint events.”
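The arithmetic behind that headline number is worth spelling out. The following is a rough sketch of the scaling, not the paper's full calculation: the ~2,000 detectable bursts per sky per day quoted above amount to only a few hundredths of a burst per second, so the "one per second" figure rests on a much larger population of faint bursts below current detection thresholds.

```python
# Rough scaling sketch (not the paper's full calculation): convert the quoted
# ~2,000 detectable bursts per sky per day into a per-second rate, and see how
# large a faint, undetected population is implied by ~1 burst per second.
bright_per_day = 2000
bright_per_second = bright_per_day / 86_400
print(f"detectable bursts: ~{bright_per_second:.3f} per second")
print(f"faint-to-bright factor needed for ~1 per second: ~{1.0 / bright_per_second:.0f}x")
```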

While the exact nature and origins of FRBs are still unknown – suggestions include rotating neutron stars and even alien intelligence! – Fialkov and Loeb indicate that they could be used to study the structure and evolution of the Universe. If indeed they occur with such regular frequency throughout the cosmos, then more distant sources could act as probes which astronomers would then rely on to plumb the depths of space.

For instance, over vast cosmic distances, there is a significant amount of intervening material that makes it difficult for astronomers to study the Cosmic Microwave Background (CMB) – the leftover radiation from the Big Bang. Studies of this intervening material could lead to new estimates of just how dense space is – i.e. how much of it is composed of ordinary matter, dark matter, and dark energy – and how rapidly it is expanding.

Gemini composite image of the field around FRB 121102, the only repeating FRB discovered so far. Credit: Gemini Observatory/AURA/NSF/NRC

And as Prof. Loeb indicated, FRBs could also be used to explore enduring cosmological questions, like how the “Dark Age” of the Universe ended:

“FRBs can be used to measure the column of free electrons towards their source. This can be used to measure the density of ordinary matter between galaxies in the present-day universe. In addition, FRBs at early cosmic times can be used to find out when the ultraviolet light from the first stars broke up the primordial atoms of hydrogen left over from the Big Bang into their constituent electrons and protons.”
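That "column of free electrons" is what radio astronomers call the dispersion measure (DM): the lower frequencies in a burst arrive slightly later, by an amount that scales as DM divided by the frequency squared. Here is a minimal sketch using the standard dispersion-delay relation; the DM value used is an illustrative assumption, not a measurement of FRB 121102.

```python
# Standard pulsar/FRB dispersion-delay relation: the delay (in ms) of a low
# frequency relative to a high one is ~4.149 * DM * (f_lo**-2 - f_hi**-2),
# with DM in pc cm^-3 and frequencies in GHz. The DM used below is an assumed,
# illustrative value, not a measurement of FRB 121102.
def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Extra arrival delay of the low-frequency edge of a burst, in milliseconds."""
    return 4.149 * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

print(f"DM = 500, 1.2-1.6 GHz band: ~{dispersion_delay_ms(500, 1.2, 1.6):.0f} ms delay")
```

Because the measured delay adds up all the free electrons along the line of sight, a large sample of FRBs at known distances would effectively weigh the ordinary matter between galaxies.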

The “Dark Age”, which occurred between 380,000 and 150 million years after the Big Bang, was characterized by a “fog” of hydrogen atoms interacting with photons. As a result of this, the radiation of this period is undetectable by our current instruments. At present, scientists are still attempting to resolve how the Universe made the transition between these “Dark Ages” and subsequent epochs when the Universe was filled with light.

This period of “reionization”, which took place 150 million to 1 billion years after the Big Bang, was when the first stars and quasars formed. It is generally believed that UV light from the first stars in the Universe traveled outwards to ionize the hydrogen gas (thus clearing the fog). A recent study also suggested that black holes that existed in the early Universe created the necessary “winds” that allowed this ionizing radiation to escape.

To this end, FRBs could be used to probe into this early period of the Universe and determine what broke down this “fog” and allowed light to escape. Studying very distant FRBs could allow scientists to study where, when and how this process of “reionization” occurred. Looking ahead, Fialkov and Loeb explained how future radio telescopes will be able to discover many FRBs.

The planned Square Kilometer Array will be the world’s largest radio telescope when it begins operations in 2018. Credit: SKA

“Future radio observatories, like the Square Kilometer Array, will be sensitive enough to detect FRBs from the first generation of galaxies at the edge of the observable universe,” said Prof. Loeb. “Our work provides the first estimate of the number and properties of the first flashes of radio waves that lit up in the infant universe.”

And then there’s the Canadian Hydrogen Intensity Mapping Experiment (CHIME) at the Dominion Radio Astrophysical Observatory in British Columbia, which recently began operating. These and other instruments will serve as powerful tools for detecting FRBs, which in turn could be used to view previously unseen regions of time and space, and unlock some of the deepest cosmological mysteries.

“[W]e find that a next generation telescope (with a much better sensitivity than the existing ones) is expected to see many more FRBs than what is observed today,” said Dr. Fialkov. “This would allow to characterize the population of FRBs and identify their origin. Understanding the nature of FRBs will be a major breakthrough. Once the properties of these sources are known, FRBs can be used as cosmic beacons to explore the Universe. One application is to study the history of reionization (cosmic phase transition when the inter-galactic gas was ionized by stars).”

It is an inspired thought, using natural cosmic phenomena as research tools. In that respect, using FRBs to probe the most distant objects in space (and as far back in time as we can) is kind of like using quasars as navigational beacons. In the end, advancing our knowledge of the Universe allows us to explore more of it.

Further Reading: CfA, Astrophysical Journal Letters

Ultraviolet Light Could Point the Way To Life Throughout the Universe

Artist's impression of how the surface of a planet orbiting a red dwarf star may appear. The planet is in the habitable zone so liquid water exists. However, low levels of ultraviolet radiation from the star have prevented or severely impeded chemical processes thought to be required for life to emerge. This causes the planet to be devoid of life. Credit: M. Weiss/CfA

Ultraviolet light is what you might call a controversial type of radiation. On the one hand, overexposure can lead to sunburn, an increased risk of skin cancer, and damage to a person’s eyesight and immune system. On the other hand, it also has some tremendous health benefits, which includes promoting stress relief and stimulating the body’s natural production of vitamin D, serotonin, and melanin.

And according to a new study from a team from Harvard University and the Harvard-Smithsonian Center for Astrophysics (CfA), ultraviolet radiation may even have played a critical role in the emergence of life here on Earth. As such, determining how much UV radiation is produced by other types of stars could be one of the keys to finding evidence of life on any planets that orbit them.

The study, titled “The Surface UV Environment on Planets Orbiting M Dwarfs: Implications for Prebiotic Chemistry and the Need for Experimental Follow-up“, recently appeared in The Astrophysical Journal. Led by Sukrit Ranjan, a visiting postdoctoral researcher at the CfA, the team focused on M-type (red dwarf) stars to determine if this class of star produces enough UV radiation to kick-start the biological processes necessary for life to emerge.

Artist’s impression of the surface of the planet Proxima b orbiting the red dwarf star Proxima Centauri. The double star Alpha Centauri AB is visible to the upper right of Proxima itself. Credit: ESO

Recent studies have indicated that UV radiation may be necessary for the formation of ribonucleic acid (RNA), which is necessary for all forms of life as we know it. And given the rate at which rocky planets have been discovered around red dwarf stars of late (examples include Proxima b, LHS 1140b, and the seven planets of the TRAPPIST-1 system), how much UV radiation red dwarfs give off could be central to determining exoplanet habitability.

As Dr. Ranjan explained in a CfA press release:

“It would be like having a pile of wood and kindling and wanting to light a fire, but not having a match. Our research shows that the right amount of UV light might be one of the matches that gets life as we know it to ignite.”

For the sake of their study, the team created radiative transfer models of red dwarf stars. They then sought to determine if the UV environment on prebiotic Earth-analog planets which orbited them would be sufficient to stimulate the photoprocesses that would lead to the formation of RNA. From this, they calculated that planets orbiting M-dwarf stars would have access to 100–1000 times less bioactive UV radiation than a young Earth.

As a result, the chemistry that depends on UV light to turn chemical elements and prebiotic conditions into biological organisms would likely shut down. Alternatively, the team estimated that even if this chemistry was able to proceed under a diminished level of UV radiation, it would operate at a much slower rate than it did on Earth billions of years ago.
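A crude blackbody comparison shows why the deficit is so large. By definition, a habitable-zone planet receives roughly the same total stellar flux as Earth, so what matters is the fraction of the star's light emitted in the bioactive UV, and that fraction collapses for cool stars. The sketch below is our own illustration, not the team's radiative transfer model: the 200-280 nm band and the two temperatures are assumptions, and it ignores chromospheric emission and flares. It comes out to a deficit of several hundred, within the 100-1000-fold range quoted above.

```python
# Fraction of a blackbody's output emitted in an assumed "bioactive" UV band
# (200-280 nm), compared between a Sun-like star and a cool M dwarf. This is
# an illustration only; it is not the radiative transfer model used in the
# paper, and it ignores chromospheric UV and flares.
import numpy as np

H, C, KB, SIGMA = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

def band_fraction(T, lo_nm, hi_nm, n=5000):
    """Fraction of total blackbody emission radiated between lo_nm and hi_nm."""
    lam = np.linspace(lo_nm, hi_nm, n) * 1e-9                     # wavelengths in meters
    exitance = 2 * np.pi * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))
    return np.sum(exitance) * (lam[1] - lam[0]) / (SIGMA * T**4)  # simple Riemann sum

sunlike = band_fraction(5800, 200, 280)   # assumed Sun-like photosphere
m_dwarf = band_fraction(3000, 200, 280)   # assumed M-dwarf photosphere
print(f"UV deficit at fixed bolometric flux: ~{sunlike / m_dwarf:.0f}x")
```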

Artist’s impression of the planet orbiting a red dwarf star. Credit: ESO/M. Kornmesser

As Robin Wordsworth – an assistant professor at the Harvard School of Engineering and Applied Science and a co-author on the study – explained, this is not necessarily bad news as far as questions of habitability go. “It may be a matter of finding the sweet spot,” he said. “There needs to be enough ultraviolet light to trigger the formation of life, but not so much that it erodes and removes the planet’s atmosphere.”

Previous studies have shown that even calm red dwarfs experience dramatic flares that periodically bombard their planets with bursts of UV energy. While this was considered to be something hazardous, which could strip orbiting planets of their atmospheres and irradiate life, it is possible that such flares could compensate for the lower levels of UV being steadily produced by the star.

This news also comes on the heels of a study that indicated how the outer planets of the TRAPPIST-1 system (including the three located within its habitable zone) might still have plenty of water on their surfaces. Here too, the key was UV radiation: the team responsible for that study monitored the TRAPPIST-1 planets for signs of hydrogen loss from their atmospheres (a sign of photodissociation).

This research also calls to mind a recent study led by Professor Avi Loeb, the Chair of the astronomy department at Harvard University, Director of the Institute for Theory and Computation, and also a member of the CfA. In that study, titled “Relative Likelihood for Life as a Function of Cosmic Time“, Loeb and his team concluded that red dwarf stars are the most likely to give rise to life because of their low mass and extreme longevity.

Artist’s impression of a sunset seen from the surface of an Earth-like exoplanet. Credit: ESO/L. Calçada

Compared to higher-mass stars that have shorter life spans, red dwarf stars are likely to remain on the main sequence for as long as six to twelve trillion years. Hence, red dwarf stars would certainly be around long enough to accommodate even a vastly decelerated rate of organic evolution. In this respect, this latest study might even be considered a possible answer to the Fermi Paradox’s question of where all the aliens are: they’re still evolving!
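Those trillion-year figures follow from a simple fuel argument. The sketch below is a textbook approximation rather than a calculation from Loeb’s paper: a star’s main-sequence lifetime scales roughly with its fuel supply divided by its burn rate (M/L), anchored to about 10 billion years for the Sun, and the luminosity of the illustrative red dwarf is an assumed value:

```python
# Rough main-sequence lifetime from the simple fuel argument t_MS ~ M / L,
# normalised to ~10 Gyr for the Sun. Illustrative, not from the study.

def main_sequence_lifetime_gyr(mass_solar, lum_solar):
    """Lifetime in Gyr assuming t_MS scales as M/L, with 10 Gyr for the Sun."""
    return 10.0 * mass_solar / lum_solar

# Illustrative 0.1 solar-mass red dwarf with an assumed luminosity of 1e-3 L_sun:
print(main_sequence_lifetime_gyr(0.1, 1e-3))   # ~1000 Gyr, i.e. about a trillion years
# Fully convective M dwarfs burn nearly all of their hydrogen, so detailed models
# push the true figure toward the multi-trillion-year range quoted above.
```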

But as Dimitar Sasselov – the Phillips Professor of Astronomy at Harvard, the Director of the Origins of Life Initiative and a co-author on the paper – indicated, there are still many unanswered questions:

“We still have a lot of work to do in the laboratory and elsewhere to determine how factors, including UV, play into the question of life. Also, we need to determine whether life can form at much lower UV levels than we experience here on Earth.”

As always, scientists are forced to work with a limited frame of reference when it comes to assessing the habitability of other planets. To our knowledge, life exists on only one planet (i.e. Earth), which naturally influences our understanding of where and under what conditions life can thrive. And despite ongoing research, the question of how life emerged on Earth is still something of a mystery.

If life should be found on a planet orbiting a red dwarf, or in extreme environments we thought were uninhabitable, it would suggest that life can emerge and evolve in conditions that are very different from those of Earth. In the coming years, next-generation missions like the James Webb Space Telescope and the Giant Magellan Telescope are expected to reveal more about distant stars and their systems of planets.

The payoff of this research is likely to include new insights into where life can emerge and the conditions under which it can thrive.

Further Reading: CfA, The Astrophysical Journal

Even Though Red Dwarfs Have Long Lasting Habitable Zones, They’d be Brutal to Life

Artist's concept of the TRAPPIST-1 star system, an ultra-cool dwarf that has seven Earth-size planets orbiting it. We're going to keep finding more and more solar systems like this, but we need observatories like WFIRST, with starshades, to understand the planets better. Credits: NASA/JPL-Caltech

Ever since scientists confirmed the existence of seven terrestrial planets orbiting TRAPPIST-1, this system has been a focal point of interest for astronomers. Given its proximity to Earth (just 39.5 light-years away), and the fact that three of its planets orbit within the star’s “Goldilocks Zone“, this system has been an ideal location for learning more about the potential habitability of red dwarf star systems.

This is especially important since the majority of stars in our galaxy are red dwarfs (aka. M-type dwarf stars). Unfortunately, not all of the research has been reassuring. For example, two recent studies performed by two separate teams from the Harvard-Smithsonian Center for Astrophysics (CfA) indicate that the odds of finding life in this system are less likely than generally thought.

Continue reading “Even Though Red Dwarfs Have Long Lasting Habitable Zones, They’d be Brutal to Life”

Solar Probe Plus Will ‘Touch’ The Sun

NASA's Solar Probe Plus will enter the sun's corona to understand space weather using a Faraday cup developed by the Smithsonian Astrophysical Observatory and Draper. Credit: NASA/Johns Hopkins University Applied Physics Laboratory

Coronal Mass Ejections (CMEs), the massive bursts of charged particles that often accompany solar flares, are seriously hazardous events. Whenever the Sun hurls one our way, it can play havoc with electrical systems, aircraft and satellites here on Earth. Worse yet is the harm it can inflict on astronauts stationed aboard the ISS, who do not have the protection of Earth’s atmosphere. As such, it is obvious why scientists want to be able to predict these events better.

For this reason, the Smithsonian Astrophysical Observatory and the Charles Stark Draper Laboratory – a Cambridge, Massachusetts-based non-profit engineering organization – are working to develop specialized sensors for NASA’s proposed solar spacecraft. Launching in 2018, this spacecraft will fly into the Sun’s atmosphere and “touch” the face of the Sun to learn more about its behavior.

This spacecraft – known as the Solar Probe Plus (SPP) – is currently being designed and built by the Johns Hopkins University Applied Physics Laboratory. Once it is launched, the SPP will use seven Venus flybys over nearly seven years to gradually shrink its orbit around the Sun. During this time, it will conduct 24 flybys of the Sun and pass into the Sun’s upper atmosphere (corona), passing within 6.4 million km (4 million mi) of its surface.

At this distance, it will have traveled 37.6 million km (23.36 million mi) closer to the Sun than any spacecraft in history. At the same time, it will set a new record for the fastest moving object ever built by human beings – traveling at speeds of up to 200 km/sec (124.27 mi/s). And last but not least, it will be exposed to heat and radiation that no spacecraft has ever faced, which will include temperatures in excess of 1371 °C (2500 °F).
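That quoted top speed is easy to sanity-check with the vis-viva equation. The short Python sketch below does so; note that the assumed aphelion (roughly 0.73 AU, near Venus’s orbit) is an illustrative value for the probe’s final orbit rather than a figure from the mission documents:

```python
# Sanity check of the quoted perihelion speed using the vis-viva equation.
# The aphelion (~0.73 AU, roughly Venus's orbit) is an assumed value.

import math

GM_SUN = 1.327e20          # m^3/s^2, standard gravitational parameter of the Sun
AU     = 1.496e11          # m
R_SUN  = 6.96e8            # m

r_peri = 6.4e9 + R_SUN     # closest approach: 6.4 million km above the surface
r_apo  = 0.73 * AU         # assumed aphelion near Venus's orbit
a      = 0.5 * (r_peri + r_apo)   # semi-major axis of the final orbit

v_peri = math.sqrt(GM_SUN * (2.0 / r_peri - 1.0 / a))
print(f"{v_peri / 1e3:.0f} km/s")   # ~190 km/s, in line with the ~200 km/s quoted above
```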

As Seamus Tuohy, the Director of the Space Systems Program Office at Draper, said in a CfA press release:

“Such a mission would require a spacecraft and instrumentation capable of withstanding extremes of radiation, high velocity travel and the harsh solar condition—and that is the kind of program deeply familiar to Draper and the Smithsonian Astrophysical Observatory.”

In addition to being an historic first, this probe will provide new data on solar activity and help scientists develop ways of forecasting major space-weather events – which impact life on Earth. This is especially important in an age when people are increasingly reliant on technology that can be negatively impacted by solar flares – ranging from aircraft and satellites to appliances and electrical devices.

According to a recent study by the National Academy of Sciences, it is estimated that a huge solar event today could cause two trillion dollars in damage in the US alone – and places like the eastern seaboard would be without power for up to a year. Without electricity to provide heating, utilities, light, and air-conditioning, the death toll from such an event would be significant.

As such, developing advanced warning systems that could reliably predict when a coronal mass ejection is coming is not just a matter of preventing damage, but saving lives. As Justin C. Kasper, the principal investigator at the Smithsonian Astrophysical Observatory and a professor in space science at the University of Michigan, said:

“[I]n addition to answering fundamental science questions, the intent is to better understand the risks space weather poses to the modern communication, aviation and energy systems we all rely on. Many of the systems we in the modern world rely on—our telecommunications, GPS, satellites and power grids—could be disrupted for an extended period of time if a large solar storm were to happen today. Solar Probe Plus will help us predict and manage the impact of space weather on society.”

To this end, the SPP has three major scientific objectives. First, it will seek to trace the flow of energy that heats and accelerates the solar corona and solar wind. Second, its investigators will attempt to determine the structure and dynamics of plasma and magnetic fields as the source of solar wind. And last, it will explore the mechanisms that accelerate and transport energetic particles – specifically electrons, protons, and helium ions.

To do this, the SPP will be equipped with an advanced suite of instruments. One of the most important of these is the one built by the Smithsonian Astrophysical Observatory with technical support from Draper. Known as the Faraday Cup – and named after the pioneering electromagnetism researcher Michael Faraday – this device will be operated by SAO and the University of Michigan in Ann Arbor.

Designed to withstand interference from electromagnetic radiation, the Faraday Cup will measure the velocity and direction of the Sun’s charged particles, and will be one of only two instruments positioned outside of the SPP’s protective sun shield – another crucial component. Measuring 11.43 cm (4.5 inches) thick, this carbon-composite shield will ensure that the probe can withstand the extreme conditions as it conducts its many flybys through the Sun’s corona.

Naturally, the mission presents several challenges, not the least of which will be capturing data while operating within an extreme environment, and while traveling at extreme speeds. But the payoff is sure to be worth it. For years, astronomers have studied the Sun, but never from inside the Sun’s atmosphere.

By flying through the birthplace of the highest-energy solar particles, the SPP is set to advance our understanding of the Sun and the origin and evolution of the solar wind. This knowledge could not only help us avoid a natural catastrophe here on Earth, but help advance our long-term goal of exploring (and even colonizing) the Solar System.

Further Reading: CfA