For the First Time, Planets Have Been Discovered in ANOTHER Galaxy!

Using the microlensing method, a team of astrophysicists has found the first extra-galactic planets! Credit: NASA/Tim Pyle

The first confirmed discovery of a planet beyond our Solar System (aka. an Extrasolar Planet) was a groundbreaking event. And while the initial discoveries were made using only ground-based observatories, and were therefore few and far between, the study of exoplanets has grown considerably with the deployment of space-based telescopes like the Kepler space telescope.

As of February 1st, 2018, 3,728 planets have been confirmed in 2,794 systems, with 622 systems having more than one planet. But now, thanks to a new study by a team of astrophysicists from the University of Oklahoma, the first planets beyond our galaxy have been discovered! Using a technique predicted by Einstein’s Theory of General Relativity, this team found evidence of planets in a galaxy roughly 3.8 billion light years away.

The study which details their discovery, titled “Probing Planets in Extragalactic Galaxies Using Quasar Microlensing“, recently appeared in The Astrophysical Journal Letters. The study was conducted by Xinyu Dai and Eduardo Guerras, a professor and postdoctoral researcher from the Homer L. Dodge Department of Physics and Astronomy at the University of Oklahoma, respectively.

For the sake of their study, the pair used the Gravitational Microlensing technique, which relies on the gravitational force of intervening objects to bend and focus light coming from a more distant source. As a planet passes in front of that source relative to the observer, its gravity briefly magnifies the light, producing a measurable change in brightness that can be used to infer the planet’s presence.

In this respect, Gravitational Microlensing is a scaled-down version of Gravitational Lensing, where an intervening object (like a galaxy cluster) is used to focus light coming from a galaxy or other large object located beyond it. It also incorporates a key element of the highly-effective Transit Method, where stars are monitored for dips in brightness to indicate the presence of an exoplanet.
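To make the distinction concrete, here is a minimal Python sketch of the standard point-lens (Paczyński) magnification formula that microlensing surveys fit to their data. The closest-approach distance and Einstein-crossing time below are arbitrary illustrative values, not figures from the Oklahoma study.

```python
import math

def paczynski_magnification(u):
    """Magnification of a background source by a point-mass lens,
    where u is the source-lens separation in units of the Einstein radius."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

def separation(t, t0=0.0, u0=0.1, tE=20.0):
    """Source-lens separation at time t (days), for a closest approach u0
    at time t0 and an Einstein-radius crossing time tE (assumed values)."""
    return math.sqrt(u0**2 + ((t - t0) / tE)**2)

# Print a simple light curve: the background source *brightens* as the lens
# passes in front of it, unlike the dip produced by a transiting planet.
for t in range(-30, 31, 5):
    u = separation(t)
    print(f"t = {t:+4d} d   magnification = {paczynski_magnification(u):5.2f}")
```

Note that in a microlensing event the background star brightens rather than dims; the technique therefore complements, rather than duplicates, the transit method described above.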

In addition to this method, which is the only one capable of detecting extra-solar planets at truly great distances (on the order of billions of light years), the team also used data from NASA’s Chandra X-ray Observatory to study a distant quasar known as RX J1131–1231. Specifically, the team relied on the microlensing properties of the supermassive black hole (SMBH) located at the center of RX J1131–1231.

They also relied on the OU Supercomputing Center for Education and Research to calculate the microlensing models they employed. From this, they observed line energy shifts that could only be explained by the presence of about 2,000 unbound planets – ranging in mass from the Moon to Jupiter – per main-sequence star, drifting between the stars of the lensing galaxy.

Image of the gravitational lens RX J1131-1231 galaxy with the lens galaxy at the center and four lensed background quasars. It is estimated that there are trillions of planets in the center elliptical galaxy in this image. Credit: University of Oklahoma

As Xinyu Dai explained in a recent University of Oklahoma press release:

“We are very excited about this discovery. This is the first time anyone has discovered planets outside our galaxy. These small planets are the best candidate for the signature we observed in this study using the microlensing technique. We analyzed the high frequency of the signature by modeling the data to determine the mass.”

While 53 planets have been discovered within the Milky Way galaxy using the Microlensing technique, this is the first time that planets have been observed in other galaxies. Much like the first confirmed discovery of an extra-solar planet, scientists were not even certain planets existed in other galaxies prior to this study. This discovery has therefore brought the study of planets beyond our Solar System to a whole new level!

And as Eduardo Guerras indicated, the discovery was possible thanks to improvements made in both modelling and instrumentation in recent years:

“This is an example of how powerful the techniques of analysis of extragalactic microlensing can be. This galaxy is located 3.8 billion light years away, and there is not the slightest chance of observing these planets directly, not even with the best telescope one can imagine in a science fiction scenario. However, we are able to study them, unveil their presence and even have an idea of their masses. This is very cool science.”

In the future, exoplanet discoveries are likely to be made within and beyond the Milky Way Galaxy. Credit: NASA

In the coming years, more sophisticated observatories will be available, which will allow for even more in the way of discoveries. These include space-based instruments like the James Webb Space Telescope (which is scheduled to launch in Spring of 2019) and ground-based observatories like the ESO’s OverWhelmingly Large (OWL) Telescope, the Very Large Telescope (VLT), the Extremely Large Telescope (ELT), and the Colossus Telescope.

At this juncture, the odds are good that some of these discoveries will be in neighboring galaxies. Perhaps then we can begin to determine just how common planets are in our Universe. At present, it is estimated that there could be as many as 100 billion planets in the Milky Way Galaxy alone! But with an estimated 1 to 2 trillion galaxies in the Universe… well, you do the math!
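Doing the back-of-the-envelope math the author invites, under the very rough assumptions that the Milky Way is typical and that the 100-billion-planet estimate holds:

```python
planets_per_galaxy = 100e9                 # ~100 billion planets estimated for the Milky Way
galaxies_low, galaxies_high = 1e12, 2e12   # ~1 to 2 trillion galaxies in the observable Universe

low = planets_per_galaxy * galaxies_low
high = planets_per_galaxy * galaxies_high
print(f"Roughly {low:.0e} to {high:.0e} planets")   # ~1e23 to 2e23 planets
```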

Further Reading: University of Oklahoma, The Astrophysical Journal Letters

Perhaps the Best Part of Electron’s Successful Launch was its Payload: the Humanity Star

Peter Beck, founder of Rocket Lab, is shown with the Humanity Star. Credit: Rocket Lab

This past weekend, the New Zealand-based aerospace company Rocket Lab reached another milestone. On Sunday, January 21st, the company conducted the second launch – the first having taken place this past summer – of its Electron booster. This two-stage, lightweight rocket is central to the company’s vision of reducing the costs of individual launches by sending light payloads to orbit with regular frequency.

This mission was also important because it was the first time that the company sent payloads into orbit. In addition to several commercial payloads, the launch also sent a secret payload into orbit at the behest of the company’s founder (Peter Beck). It is known as the “Humanity Star“, a disco-like geodesic sphere that measures 1 meter (3.3 ft) in diameter and will form a bright spot in the sky that will be visible to people on Earth.

The Humanity Star is central to Beck’s vision of how space travel can improve the lives of people here on Earth. In addition to presenting extensive opportunities for scientific research, there is also the way it fosters a sense of unity between people and nations. This is certainly a defining feature of the modern space age, where cooperation has replaced competition as the main driving force.

The Electron rocket prepping for its second launch last weekend. Credit: Rocket Lab

As Beck explained to ArsTechnica in an interview before the launch:

“The whole point of the program is to get everybody looking up at the star, but also past the star into the Universe, and reflect about the fact that we’re one species, on one planet. This is not necessarily part of the Rocket Lab program; it’s more of a personal program. It’s certainly consistent with our goal of trying to democratize space.”

Like the Electron rocket, the Humanity Sphere is made of carbon fiber materials, and its surface consists of 65 highly-reflective panels. Once it reaches an orbit of 300 by 500 km (186 by 310 mi), it will spend the next nine months there reflecting the light of the Sun back to Earth. Whether or not it will be visible to the naked eye remains to be seen, but Rocket Lab is confident it will be.
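For a rough sense of what a 300 by 500 km orbit implies, here is a quick Kepler's-third-law estimate of the Humanity Star's orbital period. This is a simple two-body sketch that ignores atmospheric drag and other perturbations, which is ultimately what will bring the sphere down after roughly nine months.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3           # mean Earth radius, m

perigee_alt, apogee_alt = 300e3, 500e3          # reported orbit altitudes, m
a = R_EARTH + (perigee_alt + apogee_alt) / 2    # semi-major axis, m

period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
print(f"Orbital period: {period_s / 60:.1f} minutes")   # ~92 minutes, i.e. ~15-16 orbits per day
```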

According to Beck, the sphere will be more visible than an Iridium flare, which is easily spotted from the surface. These flares occur when the solar panels or antennae of an Iridium satellite reflect sunlight in orbit. “Most people will be familiar with the Iridium flares, and this has got much, much more surface area than an Iridium flare,” Beck said. “In theory, it will be easy to find.”

The payload will last for about nine months in orbit. Credit: Rocket Lab

Beck got the idea for the project from talking to people about where they live. In his experience, people tend to think of their locality or nationality when they think of home. Whereas many people he had spoken to were aware that they lived on planet Earth, they were oblivious to where the Earth resided in the Solar System or the Universe at large. In this respect, the Humanity Sphere is meant to encourage people to look and think beyond.

As he states on the website the company created for the Humanity Sphere:

“For millennia, humans have focused on their terrestrial lives and issues. Seldom do we as a species stop, look to the stars and realize our position in the universe as an achingly tiny speck of dust in the grandness of it all.

“Humanity is finite, and we won’t be here forever. Yet in the face of this almost inconceivable insignificance, humanity is capable of great and kind things when we recognize we are one species, responsible for the care of each other, and our planet, together. The Humanity Star is to remind us of this.

“No matter where you are in the world, rich or in poverty, in conflict or at peace, everyone will be able to see the bright, blinking Humanity Star orbiting Earth in the night sky. My hope is that everyone looking up at the Humanity Star will look past it to the expanse of the universe, feel a connection to our place in it and think a little differently about their lives, actions and what is important.

“Wait for when the Humanity Star is overhead and take your loved ones outside to look up and reflect. You may just feel a connection to the more than seven billion other people on this planet we share this ride with.”

The Electron rocket launching on Sunday afternoon, 2:42pm, New Zealand time. Credit: Rocket Lab

The Humanity Star can also be tracked via the website. As of the penning of this article, it is moving south of the equator and should be visible to those living along the west coast of South America. So if you live in Colombia, Peru or Chile, look to the western skies and see if you can’t spot this moving star. After passing south over Antarctica, it will reemerge in the night skies over Central Asia.

Without a doubt, the Humanity Sphere is an inspired creation, and one which is in good company. Who can forget the “Blue Marble” picture snapped by the Apollo 17 astronauts, or Voyager 1‘s “pale blue dot” photo? And even for those who are too young to have witnessed it, the images of Neil Armstrong and Buzz Aldrin setting foot on the Moon still serve to remind us of how far we’ve come, and how much still awaits us out there.

Further Reading: ArsTechnica

When James Webb Finally Reaches Space, Here’s What it’ll be Hunting

Artist conception of the James Webb Space Telescope. Credit: NASA

Ever since the project was first conceived, scientists have been eagerly awaiting the day that the James Webb Space Telescope (JWST) will take to space. As the planned successor to Hubble, the JWST will use its powerful infrared imaging capabilities to study some of the most distant objects in the Universe (such as the formation of the first galaxies) and study extra-solar planets around nearby stars.

However, there has been a lot of speculation and talk about which targets will be the JWST’s first. Thankfully, following the recommendation of the Time Allocation Committee and a thorough technical review, the Space Telescope Science Institute (STScI) recently announced that it has selected thirteen science “early release” programs, which the JWST will spend its first five months in service studying.

As part of the JWST Director’s Discretionary Early Release Science Program (DD-ERS), these thirteen programs were chosen through a rigorous peer-review process, which consisted of 253 investigators from 18 countries and 106 scientific institutions choosing from over 100 proposals. Together, the selected programs have been allocated roughly 500 hours of observing time, to begin once the 6-month commissioning period has ended.

The JWST’s Optical Telescope element/Integrated Science instrument module (OTIS) undergoing testing at NASA’s Johnson Space Center. Credit: NASA/Desiree Stover

As Ken Sembach, the director of the Space Telescope Science Institute (STScI), said in an ESA press statement:

“We were impressed by the high quality of the proposals received. These programmes will not only generate great science, but will also be a unique resource for demonstrating the investigative capabilities of this extraordinary observatory to the worldwide scientific community. We want the research community to be as scientifically productive as possible, as early as possible, which is why I am so pleased to be able to dedicate nearly 500 hours of director’s discretionary time to these early release science observations.”

Each program will rely on the JWST’s suite of four scientific instruments, which have been contributed by NASA, the European Space Agency (ESA) and the Canadian Space Agency (CSA). These include the Near-Infrared Spectrograph (NIRSpec) and the Mid-Infrared Instrument (MIRI) developed by the ESA, as well as the Near-Infrared Camera (NIRCam) developed by NASA and the STScI, and the Near-Infrared Imager and Slitless Spectrograph (NIRISS) developed by the CSA.

The thirteen programs selected include “Through the looking GLASS“, which will rely on the astronomical community’s experience using Hubble to conduct slitless spectroscopy and previous surveys to gather data on galaxy formation and the intergalactic medium, from the earliest epochs of the Universe to the present day. The Principal Investigator (PI) for this program is Tommaso Treu of the University of California Los Angeles.

Once deployed, the JWST will conduct a variety of science missions aimed at improving our understanding of the Universe. Credit: NASA/STScI

Another is the Cosmic Evolution Early Release Science (CEERS) program, which will conduct overlapping observations to create a coordinated extragalactic survey. This survey is intended to let astronomers see the first visible light of the Universe (ca. 240,000 to 300,000 years after the Big Bang), as well as information from the Reionization Epoch (ca. 150 million to 1 billion years after the Big Bang) and the period when the first galaxies formed. The PI for this program is Steven Finkelstein of the University of Texas at Austin.

Then there’s the Transiting Exoplanet Community Early Release Science Program, which will build on the work of the Hubble, Spitzer, and Kepler space telescopes by conducting exoplanet surveys. Like its predecessors, this will consist of monitoring stars for periodic dips in brightness that are caused by planets passing between them and the observer (aka. Transit Photometry).

However, compared to earlier missions, the JWST will be able to study transiting planets in unprecedented detail, which is anticipated to reveal volumes about their respective atmospheric compositions, structures and dynamics. This program, for which the PI is Imke de Pater from the University of California Berkeley, is therefore expected to revolutionize our understanding of planets, planet formation, and the origins of life.
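As a concrete illustration of what transit photometry measures, the fractional dip in a star's brightness during a transit is roughly the ratio of the planet's disk area to the star's, (Rp/R*)². The sketch below uses generic Solar System values rather than any specific JWST target.

```python
R_SUN = 696_000.0      # solar radius, km
R_JUPITER = 69_911.0   # Jupiter radius, km
R_EARTH = 6_371.0      # Earth radius, km

def transit_depth(r_planet_km, r_star_km=R_SUN):
    """Fractional drop in stellar brightness during a central transit."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter-size planet, Sun-like star: {transit_depth(R_JUPITER):.3%}")  # ~1%
print(f"Earth-size planet, Sun-like star:   {transit_depth(R_EARTH):.4%}")    # ~0.008%
```

Depths this small are why space-based photometry, and now the JWST's larger aperture and infrared sensitivity, matter so much for characterizing transiting planets.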

Also focused on the study of exoplanets is the High Contrast Imaging of Exoplanets and Extraplanetary Systems program, which will focus on directly imaged planets and circumstellar debris disks. Once again, the goal is to use the JWST’s enhanced capabilities to provide detailed analyses on the atmospheric structure and compositions of exoplanets, as well as the cloud particle properties of debris disks.

Artist’s impression of the planet orbiting a red dwarf star. Credit: ESO/M. Kornmesser

But of course, not all the programs are dedicated to the study of things beyond our Solar System, as is demonstrated by the program that will focus on Jupiter and the Jovian System. Adding to the research performed by the Galileo and Juno missions, the JWST will use its suite of instruments to characterize and produce maps of Jupiter’s cloud layers, winds, composition, auroral activity, and temperature structure.

This program will also focus on some of Jupiter’s largest moons (aka. the “Galilean Moons”) and the planet’s ring structure. Data obtained by the JWST will be used to produce maps of Io’s atmosphere and volcanic surface and of Ganymede’s tenuous atmosphere, provide constraints on these moons’ thermal and atmospheric structure, and search for plumes on their surfaces. As Alvaro Giménez, the ESA Director of Science, proclaimed:

“It is exciting to see the engagement of the astronomical community in designing and proposing what will be the first scientific programs for the James Webb Space Telescope. Webb will revolutionize our understanding of the Universe and the results that will come out from these early observations will mark the beginning of a thrilling new adventure in astronomy.”

During its mission, which will last for a minimum of five years (barring extensions), the JWST will also address many other key topics in modern astronomy, probing the Universe beyond the limits of what Hubble has been capable of seeing. It will also build on observations made by Hubble, examining galaxies whose light has been stretched into infrared wavelengths by the expansion of space.

The James Webb Space Telescope’s 18-segment primary mirror, made of gold-coated beryllium, has a collecting area of 25 square meters. Credit: NASA/Chris Gunn

Beyond looking farther back in time to chart cosmic evolution, Webb will also examine the Supermassive Black Holes (SMBH) that lie at the centers of most massive galaxies – for the purpose of obtaining accurate mass estimates. Last, but not least, Webb will focus on the birth of new stars and their planets, initially focusing on Jupiter-sized worlds and then shifting focus to study smaller super-Earths.

John C. Mather, the Senior Project Scientist for the JWST and a Senior Astrophysicist at NASA’s Goddard Space Flight Center, also expressed enthusiasm for the selected programs. “I’m thrilled to see the list of astronomers’ most fascinating targets for the Webb telescope, and extremely eager to see the results,” he said. “We fully expect to be surprised by what we find.”

For years, astronomers and researchers have been eagerly awaiting the day when the JWST begins gathering and releasing its first observations. With so many possibilities and so much waiting to be discovered, the telescope’s deployment (which is scheduled for 2019) is an event that can’t come soon enough!

Further Reading: ESA, STScI

They Just Began Casting the Giant Magellan Telescope’s 5th Mirror. What a Monster Job.

The fifth mirror for the GMT's 7 segment primary mirror is being cast at the Richard F. Caris Mirror Laboratory at the University of Arizona. In this image, a worker at the lab places the last piece of glass for mirror 5. Image: Giant Magellan Telescope Organization

The fifth mirror for the Giant Magellan Telescope (GMT) is now being cast, according to an announcement from the Giant Magellan Telescope Organization (GMTO), the body behind the project. The GMT is a ground-breaking segmented telescope consisting of 7 gigantic mirrors, and is being built at the Las Campanas Observatory, in Atacama, Chile.

The mirrors for the GMT are being cast at the Richard F. Caris Mirror Laboratory, at the University of Arizona. This lab is the world centre when it comes to building large mirrors for telescopes. But in a lab known for ground-breaking, precision manufacturing, the GMT’s mirrors are pushing the engineering to its limits.

This illustration shows what the Giant Magellan Telescope will look like when it comes online. The fifth of its seven mirror segments is being cast now. Each of the segments is a 20 ton piece of glass. Image: Giant Magellan Telescope – GMTO Corporation

Seven separate mirrors, each the same size (8.4 meters), will make up the GMT’s primary mirror. One mirror will be in the centre, and six will be arranged in a circle around it. Each one of these mirrors is a 20 ton glass behemoth, and each one is cast separately. Once the seven are manufactured (and one extra, just in case), they will be assembled at the observatory site.

The result will be an optical, light-gathering surface almost 24.5 meters (80 ft.) in diameter. That is an enormous telescope, and it’s taking extremely precise engineering and manufacturing to build these mirrors.

The glass for the mirrors is custom-manufactured, low-expansion glass from Japan. This glass comes as blocks, and each mirror requires exactly 17,481 kg of these glass blocks. A custom built furnace and mold heats the glass to 1165°C (2129°F) for several hours. The glass liquefies and flows into the mold. During this time, the mold is rotated at up to 5 rpm. Then the rotation is slowed, and for several months the glass cools in the mold.
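Spin casting works because the surface of a rotating liquid settles into a paraboloid whose curvature depends only on the spin rate: z(r) = ω²r²/(2g). The sketch below plugs in the article's figures of up to 5 rpm and an 8.4 meter mirror; the real mold profile and final figuring are more involved, so treat this as an approximation of the as-cast shape.

```python
import math

G = 9.81           # gravitational acceleration, m/s^2
RPM = 5.0          # maximum rotation rate quoted above
RADIUS = 8.4 / 2   # mirror radius, m

omega = RPM * 2 * math.pi / 60             # angular velocity, rad/s
edge_sag = omega**2 * RADIUS**2 / (2 * G)  # depth of the paraboloid at the mirror's edge
focal_length = G / (2 * omega**2)          # focal length of the as-cast surface

print(f"Edge-to-center sag: {edge_sag:.2f} m")      # ~0.25 m
print(f"Focal length:       {focal_length:.1f} m")  # ~18 m
```

Spinning the furnace this way means far less glass has to be ground away later to reach the mirror's final, deeply curved shape.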

After lengthy cooling, the glass can be polished. The tolerances for the mirrors, and the final shape they must take, requires very careful, extremely accurate polishing. The first mirror was cast in 2005, and in 2011 it was still being polished.

The mirrors for the GMT are not flat; they’re described as “potato chips.” They’re aspherical and paraboloidal. They have to be surface polished to an accuracy of 25 nanometers, which is a fraction of the wavelength of visible light.

Precision manufacturing is at the heart of the Giant Magellan Telescope. The surface of each mirror must be polished to within a fraction of the wavelength of light. Image: Giant Magellan Telescope Organization

“Casting the mirrors for the Giant Magellan Telescope is a huge undertaking, and we are very proud of the UA’s leading role creating this new resource for scientific discovery. The GMT partnership and Caris Mirror Lab are outstanding examples of how we can tackle complex challenges with innovative solutions,” said UA President Robert C. Robbins. “The University of Arizona has such an amazing tradition of excellence in space exploration, and I have been constantly impressed by the things our faculty, staff, and students in astronomy and space sciences can accomplish.”

Mirror construction for the GMT is a multi-stage process. The first mirror was completed several years ago and is in storage. Three others are in various stages of grinding and polishing. The glass for mirror 6 is in storage awaiting casting, and the glass for mirror 7 is on order from Japan.

Once completed, the GMT will be situated in Atacama, at the Las Campanas Observatory, where high elevation and clear skies make for excellent seeing conditions. First light is planned for the mid-2020s.

When the mirrors for the GMT are completed, they are transported in a special container with shock absorbers and insulation. In this image, the first completed mirror is moved from the Caris Mirror Lab to storage several miles away. Image: GMTO Corp.

The GMT will be the largest telescope in existence, at least until the Thirty Meter Telescope and the European Extremely Large Telescope supersede it.

“Creating the largest telescope in history is a monumental endeavor, and the GMT will be among the largest privately-funded scientific initiatives to date,” said Taft Armandroff, Professor of Astronomy and Director of the McDonald Observatory at The University of Texas at Austin, and Vice-Chair of the GMTO Corporation Board of Directors. “With this next milestone, and with the leadership, technical, financial and scientific prowess of the members of the GMTO partnership, we continue on the path to the completion of this great observatory.”

The power of the GMT will allow it to directly image extra-solar planets. That alone is enough to get anyone excited. But the GMT will also study things like the formation of stars, planets, and disks; the assembly and evolution of galaxies; fundamental physics; and first light and re-ionization.

The Giant Magellan Telescope is one of the world’s Super Telescopes that we covered in this series of articles. The Super Telescopes include the:

  • Giant Magellan Telescope
  • James Webb Space Telescope
  • Thirty Meter Telescope
  • European Extremely Large Telescope
  • Large Synoptic Survey Telescope
  • Wide Field Infrared Survey Telescope

You can also watch our videos on the Super Telescopes: Part 1: Ground Telescopes, and Part 2: Space Telescopes.

LIGO Scientists who Detected Gravitational Waves Awarded Nobel Prize in Physics

Barry C. Barish and Kip S. Thorne, two of the recipients for the 2017 Nobel Prize in physics for their work with gravitational wave research. Credit: Caltech

In February of 2016, scientists working for the Laser Interferometer Gravitational-Wave Observatory (LIGO) made history when they announced the first-ever detection of gravitational waves. Since that time, multiple detections have taken place and scientific collaborations between observatories  – like Advanced LIGO and Advanced Virgo – are allowing for unprecedented levels of sensitivity and data sharing.

Not only was the first-time detection of gravitational waves an historic accomplishment, it ushered in a new era of astrophysics. It is little wonder then why the three researchers who were central to the first detection have been awarded the 2017 Nobel Prize in Physics. The prize was awarded jointly to Caltech professors emeritus Kip S. Thorne and Barry C. Barish, along with MIT professor emeritus Rainer Weiss.

To put it simply, gravitational waves are ripples in space-time that are formed by major astronomical events – such as the merger of a binary black hole pair. They were first predicted over a century ago by Einstein’s Theory of General Relativity, which indicated that massive perturbations would alter the structure of space-time. However, it was not until recent years that evidence of these waves was observed for the first time.

The first signal was detected by LIGO’s twin observatories – in Hanford, Washington, and Livingston, Louisiana, respectively – and traced to a black hole merger 1.3 billion light-years away. To date, four detections have been made, all of which were due to the mergers of black-hole pairs. These took place on December 26, 2015, January 4, 2017, and August 14, 2017, the last being detected by LIGO and the European Virgo gravitational-wave detector.

For the role they played in this accomplishment, one half of the prize was awarded jointly to Caltech’s Barry C. Barish – the Ronald and Maxine Linde Professor of Physics, Emeritus – and Kip S. Thorne, the Richard P. Feynman Professor of Theoretical Physics, Emeritus. The other half was awarded to Rainer Weiss, Professor of Physics, Emeritus, at the Massachusetts Institute of Technology (MIT).

As Caltech president Thomas F. Rosenbaum – the Sonja and William Davidow Presidential Chair and Professor of Physics – said in a recent Caltech press statement:

“I am delighted and honored to congratulate Kip and Barry, as well as Rai Weiss of MIT, on the award this morning of the 2017 Nobel Prize in Physics. The first direct observation of gravitational waves by LIGO is an extraordinary demonstration of scientific vision and persistence. Through four decades of development of exquisitely sensitive instrumentation—pushing the capacity of our imaginations—we are now able to glimpse cosmic processes that were previously undetectable. It is truly the start of a new era in astrophysics.”

This accomplishment was all the more impressive considering that Albert Einstein, who first predicted their existence, believed gravitational waves would be too weak to study. However, by the 1960s, advances in laser technology and new insights into possible astrophysical sources led scientists to conclude that these waves might actually be detectable.

The first gravitational-wave detectors were built by Joseph Weber, an astrophysicist from the University of Maryland. His detectors, which were built in the 1960s, consisted of large aluminum cylinders that would be driven to vibrate by passing gravitational waves. Other attempts followed, but all proved unsuccessful, prompting a shift towards a new type of detector involving interferometry.

One such instrument was developed by Weiss at MIT, which relied on the technique known as laser interferometry. In this kind of instrument, gravitational waves are measured using widely separated mirrors that reflect lasers over long distances. When gravitational waves cause space to stretch and squeeze by infinitesimal amounts, it causes the reflected light inside the detector to shift minutely.
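To get a sense of just how small these shifts are, the change in arm length is simply the strain multiplied by the arm length, ΔL = h·L. A quick estimate with a typical strain amplitude (illustrative numbers, not values from a specific detection):

```python
strain = 1e-21          # typical gravitational-wave strain amplitude at Earth
arm_length_m = 4_000.0  # length of each LIGO arm, m

delta_L = strain * arm_length_m   # ΔL = h * L
proton_radius_m = 8.4e-16         # for scale

print(f"Arm length change: {delta_L:.1e} m")
print(f"...about {delta_L / proton_radius_m * 100:.1f}% of a proton's radius")
```

Displacements this far below the size of a proton are what drove the decades of work on seismic isolation and mirror suspensions described below.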

At the same time, Thorne – along with his students and postdocs at Caltech – began working to improve the theory of gravitational waves. This included new estimates on the strength and frequency of waves produced by objects like black holes, neutron stars and supernovae. This culminated in a 1972 paper which Thorne co-published with his student, Bill Press, which summarized their vision of how gravitational waves could be studied.

That same year, Weiss also published a detailed analysis of interferometers and their potential for astrophysical research. In this paper, he stated that larger-scale operations – measuring several km or more in size – might have a shot at detecting gravitational waves. He also identified the major challenges to detection (such as vibrations from the Earth) and proposed possible solutions for countering them.

Barry C. Barish and Kip S. Thorne, two of three recipients of the 2017 Nobel Prize in Physics. Credit: Caltech

In 1975, Weiss invited Thorne to speak at a NASA committee meeting in Washington, D.C., and the two spent an entire night talking about gravitational experiments. As a result of their conversation, Thorne went back to Caltech and proposed creating an experimental gravity group, which would work on interferometers in parallel with researchers at MIT, the University of Glasgow and the University of Garching (where similar experiments were being conducted).

Development on the first interferometer began shortly thereafter at Caltech, which led to the creation of a 40-meter (130-foot) prototype to test Weiss’ theories about gravitational waves. In 1984, all of the work being conducted by these respective institutions came together. Caltech and MIT, with the support of the National Science Foundation (NSF), formed the LIGO collaboration and began work on its two interferometers in Hanford and Livingston.

The construction of LIGO was a major challenge, both logistically and technically. However, things were helped immensely when Barry Barish (then a Caltech particle physicist) became the Principal Investigator (PI) of LIGO in 1994. After a decade of stalled attempts, he was also made the director of LIGO and put its construction back on track. He also expanded the research team and developed a detailed work plan for the NSF.

As Barish indicated, the work he did with LIGO was something of a dream come true:

“I always wanted to be an experimental physicist and was attracted to the idea of using continuing advances in technology to carry out fundamental science experiments that could not be done otherwise. LIGO is a prime example of what couldn’t be done before. Although it was a very large-scale project, the challenges were very different from the way we build a bridge or carry out other large engineering projects. For LIGO, the challenge was and is how to develop and design advanced instrumentation on a large scale, even as the project evolves.”

LIGO’s two facilities, located in Livingston, Louisiana, and Hanford, Washington. Credit: ligo.caltech.edu

By 1999, construction had wrapped up on the LIGO observatories and by 2002, LIGO began to obtain data. In 2008, work began on improving its original detectors, known as the Advanced LIGO Project. The process of converting the 40-m prototype to LIGO’s current 4-km (2.5 mi) interferometers was a massive undertaking, and therefore needed to be broken down into steps.

The first step took place between 2002 and 2010, when the team built and tested the initial interferometers. While this did not result in any detections, it did demonstrate the observatory’s basic concepts and solved many of the technical obstacles. The next phase – called Advanced LIGO, which took place between 2010 and 2015 – allowed the detectors to achieve new levels of sensitivity.

These upgrades, which also happened under Barish’s leadership, allowed for the development of several key technologies which ultimately made the first detection possible. As Barish explained:

“In the initial phase of LIGO, in order to isolate the detectors from the earth’s motion, we used a suspension system that consisted of test-mass mirrors hung by piano wire and used a multiple-stage set of passive shock absorbers, similar to those in your car. We knew this probably would not be good enough to detect gravitational waves, so we, in the LIGO Laboratory, developed an ambitious program for Advanced LIGO that incorporated a new suspension system to stabilize the mirrors and an active seismic isolation system to sense and correct for ground motions.”

Rainer Weiss, famed MIT physicist and partial winner of the 2017 Nobel Prize in Physics. Credit: MIT/Bryce Vickmark

Given how central Thorne, Weiss and Barish were to the study of gravitational waves, all three were rightly recognized as this year’s recipients of the Nobel Prize in Physics. Both Thorne and Barish were notified that they had won in the early morning hours on October 3rd, 2017. In response to the news, both scientists were sure to acknowledge the ongoing efforts of LIGO, the science teams that have contributed to it, and the efforts of Caltech and MIT in creating and maintaining the observatories.

“The prize rightfully belongs to the hundreds of LIGO scientists and engineers who built and perfected our complex gravitational-wave interferometers, and the hundreds of LIGO and Virgo scientists who found the gravitational-wave signals in LIGO’s noisy data and extracted the waves’ information,” said Thorne. “It is unfortunate that, due to the statutes of the Nobel Foundation, the prize has to go to no more than three people, when our marvelous discovery is the work of more than a thousand.”

“I am humbled and honored to receive this award,” said Barish. “The detection of gravitational waves is truly a triumph of modern large-scale experimental physics. Over several decades, our teams at Caltech and MIT developed LIGO into the incredibly sensitive device that made the discovery. When the signal reached LIGO from a collision of two stellar black holes that occurred 1.3 billion years ago, the 1,000-scientist-strong LIGO Scientific Collaboration was able to both identify the candidate event within minutes and perform the detailed analysis that convincingly demonstrated that gravitational waves exist.”

Looking ahead, it is also pretty clear that Advanced LIGO, Advanced Virgo and other gravitational wave observatories around the world are just getting started. In addition to having detected four separate events, recent studies have indicated that gravitational wave detection could also open up new frontiers for astronomical and cosmological research.

For instance, a recent study by a team of researchers from the Monash Center for Astrophysics proposed a theoretical concept known as ‘orphan memory’. According to their research, gravitational waves not only cause waves in space-time, but leave permanent ripples in its structure. By studying the “orphans” of past events, gravitational waves can be studied both as they reach Earth and long after they pass.

In addition, a study was released in August by a team of astronomers from the Center of Cosmology at the University of California Irvine that indicated that black hole mergers are far more common than we thought. After conducting a survey of the cosmos intended to calculate and categorize black holes, the UCI team determined that there could be as many as 100 million black holes in the galaxy.

Another recent study indicated that the Advanced LIGO, GEO 600, and Virgo gravitational-wave detector network could also be used to detect the gravitational waves created by supernovae. By detecting the waves created by stars that explode near the end of their lifespans, astronomers would be able to see inside the hearts of collapsing stars for the first time and probe the mechanics of black hole formation.

The Nobel Prize in Physics is one of the highest honors that can be bestowed upon a scientist. But even greater than that is the knowledge that great things resulted from one’s own work. Decades after Thorne, Weiss and Barish began proposing gravitational wave studies and working towards the creation of detectors, scientists from all over the world are making profound discoveries that are revolutionizing the way we think of the Universe.

And as these scientists will surely attest, what we’ve seen so far is just the tip of the iceberg. One can imagine that somewhere, Einstein is also beaming with pride. As with other research pertaining to his theory of General Relativity, the study of gravitational waves is demonstrating that even after a century, his predictions were still bang on!

And be sure to check out this video of the Caltech Press Conference where Barish and Thorne were honored for their accomplishments:

Further Reading: NASA, Caltech

Proposed Hyperloop Route Between Toronto and Montreal!

An artist's rendering of a hyperloop on a track leading to downtown Toronto. A pitch for a hyperloop linking Toronto and Montreal via Ottawa is among 10 winners of a global challenge. Credit: transpodhyperloop.com

In 2012, SpaceX founder Elon Musk unveiled his idea for what he called the “fifth mode of transportation”. Known as the Hyperloop, his proposal called for the creation of a high-speed mass transit system where aluminum pod cars traveled through a low-pressure steel tube. This system, he claimed, would be able to whisk passengers from San Francisco to Los Angeles in just 35 minutes.

Since that time, many companies have emerged that are dedicated to making this proposal a reality, which include the Los Angeles-based company known as Hyperloop One. Back in 2016, this company launched the Hyperloop One Global Challenge to determine where Hyperloop routes should be built. Earlier this month, the winners of this competition were announced, which included the team recommending a route from Toronto to Montreal.

The Toronto-Montreal team (aka. team HyperCAN) was just one of over 2600 teams that registered for the competition, a combination of private companies, engineers, and urban planners. After the field was narrowed down to the 35 strongest proposals, ten finalists were selected. These included team HyperCAN, as well as teams from India, Mexico, the UK and the US.

The Toronto-to-Montreal route, one of ten winning proposals submitted to the Hyperloop One Global Challenge. Credit: hyperloop-one.com

As Rob Lloyd, the CEO of Hyperloop One, said about the competition in a company statement:

“The results of the Hyperloop One Global Challenge far exceeded our expectations. These 10 teams each had their unique strengths in showcasing how they will alleviate serious transportation issues in their regions… Studies like this bring us closer to our goal of implementing three full-scale systems operating by 2021.”

Team HyperCAN was led by AECOM Canada, the Canadian subsidiary of the multinational engineering firm. For their proposal, they considered how a Hyperloop system would address the transportation needs of Canada’s largest megacity region. This region is part of what is sometimes referred to as the Quebec City-Windsor corridor, which has remained the most densely-populated region in modern Canadian history.

The region that extends from Montreal to Toronto, and includes the nation’s capital of Ottawa, is by far the most populated part of this corridor. It is the fourth most populous region in North America, with roughly 1 in 4 Canadians – over 13 million people – living in a region that measures 640 km (400 mi) long. Between the density, urban sprawl, and sheer volume of business that goes on in this area, traffic congestion is a natural problem.

In fact, traveling from Montreal to Ottawa to Toronto can take a minimum of five hours by car, and the highway connections between them – Highway 417 (the “Queensway”) and Highway 401 – are the busiest in all of Canada. Within the greater metropolitan area of Toronto alone, the average daily traffic on the 401 is about 450,000 vehicles, and this never drops below 20,000 vehicles between urban centers.

In Montreal, the situation is much the same. In an average year, commuters spend an estimated 52 hours stuck in peak hour traffic, which earned the city the dubious distinction of having the worst commute in the country. To make matters worse, it is anticipated that population and urban growth are going to make congestion grow by about 6% in the next few years (by 2020).

Hence why team HyperCAN thinks a Hyperloop network would be ideally suited for this corridor. Not only would it offer commuters an alternative to driving on busy highways, it would also address the current lack of rapid and on-demand mass transport in this region. According to AECOM Canada’s proposal:

“No mode of transportation has existing or planned capacity to accommodate the growth in traffic along this corridor. By moving higher volumes of people in less time, Hyperloop could generate greater returns socially and provide much-needed capacity to accommodate the forecasted growth in demand for travel in the corridor.”

The benefits of such a high-speed transit system are also quite clear. Based on its top projected speeds, a Hyperloop trip between Ottawa and Toronto – which ideally takes about 3 hours by car – could be reduced to 27 minutes. A trip from Montreal to Ottawa could be done in 12 minutes instead of 2 hours, and a trip between Toronto and Montreal could be done in just 39 minutes.
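These time savings follow directly from the pod's projected cruising speed. Here is a rough sanity check in Python, using approximate intercity distances and a ~1,000 km/h cruise speed; the distances are assumed round figures, and acceleration, deceleration, and routing are ignored.

```python
CRUISE_SPEED_KMH = 1000.0   # approximate projected Hyperloop cruising speed

# Approximate city-centre to city-centre distances, km (assumed round figures)
routes = {
    "Montreal - Ottawa":  200,
    "Ottawa - Toronto":   450,
    "Toronto - Montreal": 540,
}

for route, distance_km in routes.items():
    minutes = distance_km / CRUISE_SPEED_KMH * 60
    print(f"{route}: ~{minutes:.0f} minutes at cruise speed")
```

The results land close to the 12-, 27- and 39-minute figures quoted above, with the remaining difference accounted for by acceleration and braking at each end of the trip.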

And since the Hyperloop would make its transit from city-center to city-center, it offers something that high-speed rail and air travel do not – on-demand connections between cities. The existence of such a system could therefore attract business, investment, workers and skilled professionals to the region and allow the Toronto-Montreal corridor to gain an advantage in the global economy.

Of course, whenever major projects come up, it’s only a matter of time before the all-important aspect of cost rears its head. However, as Hyperloop One indicated, such a project could benefit from existing infrastructure spending in Canada. Recently, the Trudeau administration created an infrastructure bank that pledged $81.2 billion CAD ($60.8 billion USD) in spending over the next 12 years for public transit, transport/trade corridors, and green infrastructure.

A Hyperloop that connects three of Canada’s largest and most dynamic cities together certainly meets all these criteria. In fact, according to team HyperCAN, green infrastructure would be yet another benefit of a Toronto-Montreal Hyperloop system. As they argued in their proposal, the Hyperloop can be powered by hydro or other renewables and would be 100% emissions-free.

This would be consistent with the Canadian government commitment to reducing carbon emissions by 30% by 2030 (from their 2005 levels). According to figures compiled by Environment and Climate Change Canada, in 2015:

“Canada’s total greenhouse gas (GHG) emissions were 722 megatonnes (Mt) of carbon dioxide equivalent (CO2 eq). The oil and gas sector was the largest GHG emitter in Canada, accounting for 189 Mt CO2 eq (26% of total emissions), followed closely by the transportation sector, which emitted 173 Mt CO2 eq (24%).”

By allowing commuters to switch to a mass transit system that would reduce the volume of cars traveling between cities, and produces no emissions itself, a Hyperloop would help Canadians meet their reduced-emission goals. Last, but certainly not least, there is the way that such a system would create opportunities for economic growth and cooperation between Canada and the US.

On the other side of the border from the Quebec City-Windsor Corridor, there is the extended urban landscape that includes the cities of Chicago, Detroit, Cincinnati, Cleveland, Columbus, Indianapolis, Pittsburgh, and St. Louis. This transnational mega-region, which has over 55 million people living within it, is sometimes referred to as the Great Lakes Megalopolis.

Not only would a Hyperloop connection between two of its northernmost urban centers offer opportunities for cross-border commerce, it would also present the possibility of extending this line down into the US. With a criss-cross pattern of Hyperloops that can whisk people from St. Louis and Pittsburgh to Montreal, business would move at a speed never before seen!

Given the litany of reasons for building a Hyperloop along this corridor, it should come as no surprise that AECOM and team HyperCAN are not alone in proposing that it be built. TransPod Inc, a Toronto-based Hyperloop company, is also interested in constructing Hyperloop lines in countries where aging infrastructure, high-density populations, and a need for new transportation networks coincide.

As Sebastien Gendron, the CEO of TransPod, recently indicated in an interview with Huffington Post Canada, his company hopes to have a Hyperloop up and running in Canada by 2025. He also expressed high hopes that the public will embrace this new form of transit once it’s available. “We already travel at that speed with an aircraft and the main difference with our system is we are on the ground,” he said. “And it’s safer to be on the ground than in the air.”

According to Gendron, TransPod is currently engaged in talks with the federal transportation department to ensure safety regulations are in place for when the technology is ready to be implemented. In addition, his company is also bidding for provincial and city support to build a 4 to 10 km (2.5 to 6 mi) track between Calgary and Edmonton in Alberta, which would connect the roughly 3 million people living there.

When Musk first unveiled his vision for the Hyperloop, he indicated that he was too busy with other projects to pursue it, but others were free to take a crack at it. In the five years that have followed, several companies have emerged that have been more than happy to oblige him. And Musk, to his credit, has offered support by holding events like Pod Design Competitions and offering the use of his company’s own test track.

And despite misgivings by those who claimed that such a system posed too many technical and engineering challenges – not to mention that the cost would be prohibitive – those who are committing to building Hyperloops remain undeterred. With every passing year, the challenges seem that much more surmountable, and support from the public and private sector is growing.

By the 2020s and 2030s, we could very well be seeing Hyperloops running between major cities in every mega-region in the world. These could include Toronto and Montreal, Boston and New York, Los Angeles and San Francisco, Moscow and St. Petersburg, Tokyo and Nagoya, Mumbai and New Delhi, Shanghai and Beijing, and London and Edinburgh.

Of course, that’s just for starters!

Further Reading: Hyperloop One, Huffington Post Canada

Second Hyperloop Pod Design Competition A Success

At the recent ceremony for the Hyperloop Pod Competition, Musk announced that his concept for a high-speed train might work better on Mars. Credit: HTT

Back in 2012, Elon Musk proposed a revolutionary idea that he described as the “fifth form of transportation“. Known as the Hyperloop, his proposal called for the creation of a high-speed mass transit system where aluminum pod cars traveled through a low-pressure steel tube. This system, he claimed, would be able to whisk passengers from San Francisco to Los Angeles in just 35 minutes.

At the time, Musk claimed he was too busy to build such a system, but that others were free to take a crack at it. Nevertheless, the SpaceX founder has remained involved in the Hyperloop’s development by hosting the Hyperloop Pod Design Competition, an incentive competition involving student and engineering teams. The second of these competitions was recently held and featured some impressive pods achieving impressive speeds.

The Pod Design Competition was first announced in June of 2015, and was quickly joined by over 700 teams. By January of 2017, 100 teams were selected to take part in the first competition, which was held from January 27th to 29th at SpaceX’s Hyperloop Test Track (located in Hawthorne, California). Also known as the Hypertube, this track consists of a partial-vacuum steel tube that measures 1.6 km (1 mi) long and 1.83 meters in diameter.

Team MIT’s Hyperloop pod car design. Credit: MIT/Twitter

The winning design, which was provided by a team from MIT, consisted of a car that would rely on electrodynamic suspension to achieve a cruising speed of 110 m/s (396 km/h; 246 mph). Based on the positive response and submissions from the first competition, SpaceX decided to hold the Hyperloop Pod Competition II, which took place this past weekend (August 25th to 27th, 2017) at their Hypertube test track.

Whereas the first competition involved a series of tests designed to accelerate the development of prototypes, the second had only one criterion: maximum speed. The competition was open to both new and returning teams, the latter of which had already tested their pods in the first competition. Twenty-five teams registered in the competition, representing universities and technical institutions from all over the world.

But in the end, only three teams made the cut and competed on Sunday, August 27th, having met the pre-run criteria. The winning entry came from WARR Hyperloop, a team made up of students from the Technical University of Munich. During the test run, their pod achieved a top speed of 324 km/h (201 mph), which was far in excess of the second place team.

It was even more impressive than WARR’s previous test run during the first competition – where their pod achieved a maximum speed of about 93 km/h (58 mph). The WARR pod was also the only one that attempted to reach its maximum speed during the competition. Their success was due in part to the pod’s design, which is fabricated from carbon fiber to ensure that it is lightweight and durable.

https://www.instagram.com/p/BYU1EttgwJE/?taken-by=elonmusk

Musk praised the teams’ efforts during the competition and took to Twitter to post the results of the latest pod tests. As he tweeted at 17:32, shortly after the test run, “Congratulations to WARR team from Tech Univ Munich for winning 2nd competition! Peak speed of 324 km/h, which is over 200 mph!!”

Some additional comments followed later that day and on the following morning (Monday, August 28th):

“Might be possible to go supersonic in our test Hyperloop tube, even though it’s only 0.8 miles long. Very high accel/decel needed… To be clear, a Hyperloop passenger version wouldn’t have intense light strobe effect (just for testing), nor uncomfortable acceleration.”

“Btw, high accel only needed because tube is short. For passenger transport, this can be spread over 20+ miles, so no spilt drinks.”

“Will run the SpaceX pusher sled later this week and see what it can do.”
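Musk's point about needing very high acceleration can be checked with basic kinematics: reaching the speed of sound within half of the 0.8 mile tube (leaving the other half for braking) would require roughly nine g's. A back-of-the-envelope sketch:

```python
SPEED_OF_SOUND = 343.0              # m/s, sea-level value
TUBE_LENGTH_M = 0.8 * 1609.34       # SpaceX test tube length, m
accel_distance = TUBE_LENGTH_M / 2  # accelerate over half, brake over the other half

# v^2 = 2 * a * d  ->  a = v^2 / (2 * d)
accel = SPEED_OF_SOUND**2 / (2 * accel_distance)
print(f"Required acceleration: {accel:.0f} m/s^2 (~{accel / 9.81:.1f} g)")
```

That is why Musk notes that, spread over 20+ miles in a passenger system, the same top speeds would involve no uncomfortable acceleration at all.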

Musk posted the video of the WARR pod’s performance on Twitter and to his Instagram account (see below). He also announced that SpaceX and his latest startup – The Boring Company – will be hosting a third pod design competition next year. The stakes, he claimed, would be even higher for this competition, with pods expected to reach speeds of over 500 km/h (310 mph) on the test track.

While this is still far from the speeds that Musk originally envisioned in his white paper – up to 1280 km/h (800 mph) – it does represent a significant progression. And with six startups now looking to make the Hyperloop a reality – including Hyperloop Transportation Technologies (HTT) and Hyperloop One – the only question is, how long before the “fifth mode of transportation” becomes a reality?

Be sure to check out this video of the test track during the first Pod Design Competition, courtesy of SpaceX:

Further Reading: TechCrunch, SpaceX, Instagram

Experiment Detects Mysterious Neutrino-Nucleus Scattering For the First Time

The Spallation Neutron Source, located at the Oak Ridge National Laboratory. Credit: neutrons.ornl.gov

Neutrinos are one of the fundamental particles that make up the Universe. Compared to other types of particles, they have very little mass, no charge, and only interact with others via the weak nuclear force and gravity. As such, finding evidence of their interactions is extremely difficult, requiring massive instruments located deep underground to shield them from any interference.

However, using the Spallation Neutron Source (SNS) – a research facility located at the Oak Ridge National Laboratory (ORNL) – an international team of researchers recently made a historic discovery about neutrinos using an entirely different method. As part of the COHERENT experiment, these results confirm a prediction made 43 years ago and offer new possibilities for neutrino research.


Need a Job? NASA is Looking for a New Planetary Protection Officer

In the future, planetary protection (where we ensure that missions do not contaminate other worlds with Earth-borne organisms) will be especially important. Credit: NASA, JPL-Caltech

NASA has always had its fingers in many different pies. This should come as no surprise, since the advancement of science and the exploration of the Universe requires a multi-faceted approach. So in addition to studying Earth and distant planets, they also study infectious diseases and medical treatments, and ensure that food, water and vehicles are safe. But protecting Earth and other planets from contamination, that’s a rather special job!

For decades, this responsibility has fallen to the NASA Office of Planetary Protection, the head of which is known as the Planetary Protection Officer (PPO). Last month, NASA announced that it was looking for a new PPO, the person whose job it will be to ensure that future missions to other planets don’t contaminate them with microbes that have come along for the ride, and that return missions don’t bring extra-terrestrial microbes back to Earth.

Since the beginning of the Space Age, federal agencies have understood that any and all missions carried with them the risk of contamination. Aside from the possibility that robotic or crewed missions might transport Earth microbes to foreign planets (and thus disrupt any natural life cycles there), it was also understood that missions returning from other bodies could bring potentially harmful organisms back to Earth.

Back when NASA was still in the midst of the Apollo Program, it was decided that steps needed to be taken to ensure that missions to other bodies did not cause contamination. Credit: NASA

As such, the Office of Planetary Protection was established in 1967 to ensure that these risks were mitigated using proper safety and sterilization protocols. This was shortly after the United Nation’s Office of Outer Space Affairs (UNOOSA) drafted the Outer Space Treaty, which was signed by the United States, the United Kingdom, and the Soviet Union (as of 2017, 107 countries have become party to the treaty).

The goals of the Office of Planetary Protection are consistent with Article IX of the Outer Space Treaty; specifically, the part which states:

“States Parties to the Treaty shall pursue studies of outer space, including the Moon and other celestial bodies, and conduct exploration of them so as to avoid their harmful contamination and also adverse changes in the environment of the Earth resulting from the introduction of extraterrestrial matter and, where necessary, shall adopt appropriate measures for this purpose.”

The office and its practices are also consistent with NASA’s internal policies. These include NASA Procedural Requirement (NPR) 8020.12D, “Planetary Protection Provisions for Robotic Extraterrestrial Missions”, and NASA Policy Directive (NPD) 8020.7, “Biological Contamination Control for Outbound and Inbound Planetary Spacecraft”, which require that all missions comply with protection procedures.

For decades, these directives have been followed to ensure that missions to the Moon, Mars and the Outer Solar System did not threaten these extra-terrestrial environments. For example, after eight years studying Jupiter and its largest moons, the Galileo probe was deliberately crashed into Jupiter’s atmosphere to ensure that none of its moons (which could harbor life beneath their icy surfaces) were contaminated by Earth-based microbes.

Artist’s concept of the Galileo space probe passing through the Jupiter system. Credit: NASA

The same procedure will be followed by the Juno mission, which is currently in orbit around Jupiter. Barring a possible mission extension, the probe is scheduled to be deorbited after conducting a total of 12 orbits of the gas giant. This will take place in July of 2018, at which point, the craft will burn up to avoid contaminating the Jovian moons of Europa, Ganymede and Callisto.

The same holds true for the Cassini spacecraft, which is currently passing between Saturn and its system of rings as part of the mission’s Grand Finale. When this phase of its mission is complete – on September 15th, 2017 – the probe will be deorbited into Saturn’s atmosphere to prevent any microbes from reaching Enceladus, Titan, and Dione, moons that may also support life in their interiors (or, in Titan’s case, even on its surface!).

To be fair, the position of Planetary Protection Officer is not unique to NASA. The European Space Agency (ESA), the Japan Aerospace Exploration Agency (JAXA) and other space agencies have similar positions. However, it is only within NASA and the ESA that it is considered a full-time job. The position is held for three years (with a possible extension to five) and is compensated to the tune of $124,406 to $187,000 per year.

The job, which can be applied for on USAJOBS.gov (and not through the Office of Planetary Protection), will remain open until August 18th, 2017. According to the posting, the PPO will be responsible for:

  • Leading planning and coordinating activities related to NASA mission planetary protection needs.
  • Leading independent evaluation of, and providing advice regarding, compliance by robotic and human spaceflight missions with NASA planetary protection policies, statutory requirements and international obligations.
  • Advising the Chief, SMA and other officials regarding the merit and implications of programmatic decisions involving risks to planetary protection objectives.
  • In coordination with relevant offices, leading interactions with COSPAR, National Academies, and advisory committees on planetary protection matters.
  • Recommending and leading the preparation of new or revised NASA standards and directives in accordance with established processes and guidelines.

What’s more, the fact that NASA is advertising the position is partly due to some recent changes to the role. As Catharine Conley*, NASA’s only planetary protection officer since 2014, indicated in a recent interview with Business Insider: “This new job ad is a result of relocating the position I currently hold to the Office of Safety and Mission Assurance, which is an independent technical authority within NASA.”

While the position has undeniably been important in the past, it is expected to take on even greater importance given NASA’s planned activities for the future. These include NASA’s proposed “Journey to Mars”, which would see crewed missions set humans down on the Red Planet sometime in the 2030s. And in just a few years’ time, the Mars 2020 rover is scheduled to begin searching the Martian surface for signs of life.

As part of this mission, the Mars 2020 rover will collect soil samples and place them in a cache to be retrieved by astronauts during a later crewed mission. Beyond Mars, NASA also hopes to conduct missions to Europa, Enceladus and Titan to look for signs of life. Each of these worlds has the necessary ingredients – including the prebiotic chemistry and geothermal energy – to support basic lifeforms.

Given that we intend to expand our horizons and explore increasingly exotic environments in the future – which could finally lead to the discovery of life beyond Earth – it only makes sense that the role of the Planetary Protection Officer become more prominent. If you think you’ve got the chops for it, and don’t mind a six-figure salary, be sure to apply soon!

*According to BI, Conley has not indicated if she will apply for the position again.

Further Reading: Business Insider, USAJOBS

Breakthrough Lofts the Smallest Satellites Ever, not Interstellar Yet, but a Step Forward

Project Starshot, an initiative sponsored by the Breakthrough Foundation, is intended to be humanity's first interstellar voyage. Credit: breakthroughinitiatives.org

In 2015, Russian billionaire Yuri Milner established Breakthrough Initiatives, a non-profit organization dedicated to enhancing the search for extraterrestrial intelligence (SETI). In April of the following year, he and the organization he founded announced the creation of Breakthrough Starshot, a program to create a lightsail-driven “wafercraft” that would make the journey to the nearest star system – Alpha Centauri – within our lifetime.

This past June, the organization took a major step towards achieving this goal. After hitching a ride on some satellites being deployed to Low Earth Orbit (LEO), Breakthrough conducted a successful test flight of its first spacecraft. Known as “Sprites”, these are not only the smallest spacecraft ever launched, but prototypes for the eventual wafercraft Starshot hopes to send to Alpha Centauri.

The concept for a wafercraft is simple. By leveraging recent developments in computing and miniaturization, spacecraft that are the size of a credit card could be created. These would be capable of carrying all the necessary sensors, microprocessors and microthrusters, but would be so small and light that it would take much less energy to accelerate them to relativistic speeds – in the case of Starshot, up to 20% the speed of light.
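
To put that advantage in rough numbers (a back-of-the-envelope estimate of ours, not a figure from Breakthrough), the relativistic kinetic energy of a one-gram craft at the 20%-of-light-speed target works out to:

\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} = \frac{1}{\sqrt{1 - 0.2^{2}}} \approx 1.021

E_{k} = (\gamma - 1)\, m c^{2} \approx 0.021 \times (10^{-3}\ \mathrm{kg}) \times (3 \times 10^{8}\ \mathrm{m/s})^{2} \approx 1.9 \times 10^{12}\ \mathrm{J}

That is on the order of two terajoules. A conventional tonne-class probe at the same speed would need roughly a million times more energy, which is the basic case for gram-scale wafercraft.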

Artist’s illustration of a light-sail powered by a laser beam (red) generated on Earth’s surface. Credit: M. Weiss/CfA

As Pete Worden – Breakthrough Starshot’s executive director and the former director of NASA’s Ames Research Center – said in an interview with Scientific American:

“This is a very early version of what we would send to interstellar distances. In addition, this is another clear demonstration that it is possible for countries to work together to do great things in space. These are European spacecraft with U.S. nanosatellite payloads launching on an Indian booster—you can’t get much more international than that.”

Professor Abraham Loeb also shared some words to mark this historic occasion. In addition to being the Frank B. Baird Jr. Professor of Science, the Chair of the Astronomy Department and the Director of the Institute for Theory and Computation at Harvard University, Prof. Loeb is the chairman of the Breakthrough Starshot Advisory Committee. As he told Universe Today via email:

“The launch of the Sprite satellites marks the first demonstration that miniaturized electronics on small chips can be launched without damage, survive the harsh environment of space and communicate successfully with Earth. The Starshot Initiative aims to launch similar chips attached to a lightweight sail that is being pushed by a laser beam to a fifth of the speed of light, so that its camera, communication and navigation devices (whose total weight is of order a gram) will reach the nearest planet outside the Solar System within our generation.”

A prototype Sprite nanosatellite, showing its solar panel, microprocessors, sensors and transmitters. Credit: Zac Manchester

The craft were deployed on June 23rd, piggybacking on two satellites belonging to the multinational technology corporation OHB System AG. Much like the StarChips that Starshot is proposing, the Sprites represent a major step in the evolution of miniature spacecraft that can do the job of larger robotic explorers. They measure just 3.5 by 3.5 cm (1.378 x 1.378 inches) and weigh only four grams (0.14 ounces), but still manage to pack solar panels, computers, sensors and radios into their tiny frames.

The Sprites were originally conceived by Zac Manchester, a postdoctoral researcher and aerospace engineer at Cornell University. Back in 2011, he launched a Kickstarter campaign (called “KickSat“) to raise funds to develop the concept, which was his way of bringing down the associated costs of spaceflight. The campaign was a huge success, with Manchester raising a total of $74,586 – well beyond his original goal of $30,000.

Now a member of Breakthrough Starshot (where he is in charge of Wafer design and optimization), Manchester oversaw the construction of the Sprites from the Sibley School of Mechanical and Aerospace Engineering at Cornell. As Professor Loeb explained:

“The Sprites project is led by Zac Manchester, a Harvard postdoc who started working on this during his PhD at Cornell. Sprites are chip-size satellites powered by sunlight, intended to be released in space to demonstrate a new technology of lightweight (gram-scale) spacecraft that can communicate with Earth.”

Zac Manchester holding a prototype KickSat. Credit: Zac Manchester/Kickstarter

The purpose of this mission was to test how well the Sprites’ electronics systems and radio communications performed in orbit. Upon deployment, the Sprites remained attached to these satellites (known as “Max Valier” and “Venta”) and began transmitting. Their signals were then picked up by ground stations, demonstrating that the Sprites’ novel radio communication architecture performed exactly as it was designed to.

With this test complete, Starshot now has confirmation that a wafercraft is capable of operating in space and communicating with ground-based controllers. In the coming months and years, the many scientists and engineers behind this program will no doubt seek to test other essential systems (such as the craft’s microthrusters and imagers) while also working on the various engineering challenges that an interstellar mission would entail.

In the meantime, the Sprites are still transmitting and are in radio contact with ground stations located in California and New York (as well as radio enthusiasts around the world). For those looking to listen in on their communications, Prof. Loeb was kind enough to let us know what frequency they are transmitting on.

“The radio frequency at which the Sprites that were just launched operate is 437.24 MHz, corresponding to a wavelength of roughly 69 cm,” he said. So if you’ve got a ham radio and feel like tuning in, this is where to set your dials!
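
For anyone who wants to double-check that figure, it follows from the standard frequency-to-wavelength conversion (our arithmetic, not Prof. Loeb’s):

\lambda = \frac{c}{f} = \frac{2.998 \times 10^{8}\ \mathrm{m/s}}{437.24 \times 10^{6}\ \mathrm{Hz}} \approx 0.686\ \mathrm{m} \approx 69\ \mathrm{cm}

That places the downlink in the 70 cm amateur band, which is why hobbyists with UHF equipment can pick it up.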

And be sure to check out Zac Manchester’s Kickstarter video, which showcases the technology and inspiration for the KickSat:

Further Reading: Breakthrough Initiatives