James Webb Makes The Journey From Houston To Los Angeles; Last Stop Before It Heads To The Launch Facility In 2019

A look inside the cavernous cargo hold of the C-5 aircraft that carried the James Webb to California. Image: NASA/Chris Gunn

The two halves of the James Webb Space Telescope are now in the same location and ready to take the next step on JWST’s journey. On February 2nd, Webb’s Optical Telescope and Integrated Science instrument module (OTIS) arrived at Northrop Grumman Aerospace Systems in Redondo Beach, California. The integrated spacecraft element, consisting of the spacecraft bus and sunshield, was already there, waiting for OTIS so the two halves could be joined into a complete observatory.

“The team will begin the final stages of integration of the world’s largest space telescope.” – Scott Willoughby, Northrop Grumman’s Program Manager for the JWST.

“It’s exciting to have both halves of the Webb observatory – OTIS and the integrated spacecraft element – here at our campus,” said Scott Willoughby, vice president and program manager for Webb at Northrop Grumman. “The team will begin the final stages of integration of the world’s largest space telescope.”

The Space Telescope Transporter for Air, Road and Sea (STTARS) is a custom-designed container that holds the James Webb’s Optical Telescope and Integrated Science (OTIS) instrument module. In this image it’s being unloaded from a U.S. military C-5 Charlie aircraft at Los Angeles International Airport (LAX) on Feb. 2, 2018. Image: NASA/Chris Gunn

OTIS arrived from the Johnson Space Center in Houston, where it had successfully completed its cryogenic testing. To prepare for that journey, OTIS was placed inside a custom shipping container designed to protect the delicate and expensive Webb Telescope from any damage. That specially designed container is called the Space Telescope Transporter for Air, Road and Sea (STTARS).

STTARS is a massive container, measuring 4.6 meters (15 feet) wide, 5.2 meters (17 feet) tall, and 33.5 meters (110 feet) long, and weighing approximately 75,000 kilograms (almost 165,000 pounds). It’s much larger than the James Webb itself, but even then, the primary mirror wings and the secondary mirror tripod must be folded into flight configuration in order to fit.

The Space Telescope Transporter for Air, Road and Sea (STTARS) at NASA’s Johnson Space Center in Houston. Image: NASA/Chris Gunn

The next step for the JWST is to join the spacecraft element with OTIS. Once that happens, JWST will be complete and fully integrated. Then comes another round of tests, called observatory-level testing. After that, another journey inside STTARS to Kourou, French Guiana, where the JWST will be launched in 2019.

“This is a major milestone.” – Eric Smith, director of the James Webb Space Telescope Program at NASA.

“This is a major milestone,” said Eric Smith, director of the James Webb Space Telescope Program at NASA. “The Webb observatory, which is the work of thousands of scientists and engineers across the globe, will be carefully tested to ensure it is ready to launch and enable scientists to seek the first luminous objects in the universe and search for signs of habitable planets.”

You can’t fault people, either NASA personnel or the rest of us, for getting excited about each development in the James Webb Space Telescope story. Every time the thing twitches or moves, our excitement re-spawns. It seems like everything that happens with the JWST is now a milestone in its long, uncertain journey. It’s easy to see why.

The Space Telescope That Almost Wasn’t

The James Webb ran into a lot of problems during its development. As can be expected for a ground-breaking, technology-pushing project like Webb, it’s expensive. In 2011, when the project was well underway, it was revealed that Webb would cost $8.8 billion, far more than the initial budget of $1.6 billion. The House of Representatives moved to cancel the project, then restored it, though funding was capped at $8 billion.

That was the main hurdle facing the development of the JWST, but there were others, including timeline delays. The most recent timeline change moved the launch date from 2017 to Spring 2019. As of now, the James Webb is on schedule, and on target to meet its revised budget.

The First “Super Telescope”

The JWST is the first of the “Super Telescopes” to go into operation. Once it’s in place at Lagrange Point 2 (L2), about 1.5 million km (930,000 miles) from Earth, it will begin observing, primarily in the infrared. It will surpass both the Hubble and Spitzer space telescopes, and will “look back in time” to some of the oldest stars and galaxies in the universe. It will also examine exoplanets and contribute to the search for life.
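For readers who like to check the numbers: the quoted 1.5 million km falls out of the standard approximation that L2 sits about one Hill radius beyond Earth. A minimal back-of-the-envelope sketch, with rounded constants that are not taken from the article:

```python
# Rough check of the Earth-L2 distance: L2 lies roughly one Hill radius
# beyond Earth, r ~ a * (M_earth / (3 * M_sun))**(1/3).
A_EARTH_SUN_M = 1.496e11   # Earth-Sun distance, metres
M_EARTH_KG = 5.972e24
M_SUN_KG = 1.989e30

r_l2 = A_EARTH_SUN_M * (M_EARTH_KG / (3.0 * M_SUN_KG)) ** (1.0 / 3.0)
print(f"Approximate Earth-L2 distance: {r_l2 / 1e9:.2f} million km")  # ~1.50
```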

Good News For The Search For Life, The Trappist System Might Be Rich In Water

This artist’s impression shows several of the planets orbiting the ultra-cool red dwarf star TRAPPIST-1. New observations and analysis have yielded good estimates of the densities of all seven of the Earth-sized planets and suggest that they are rich in volatile materials, probably water. Image Credit: ESO

When we finally find life somewhere out there beyond Earth, it’ll be at the end of a long search. Life probably won’t announce its presence to us; we’ll have to follow a long chain of clues to find it. As scientists keep telling us, that chain of clues starts with water.

The discovery of the TRAPPIST-1 system last year generated a lot of excitement: seven planets orbiting the star TRAPPIST-1, only 40 light-years from Earth. At the time, astronomers thought at least some of them were Earth-like. But now a new study shows that some of the planets could hold more water than Earth. About 250 times more.

Continue reading “Good News For The Search For Life, The Trappist System Might Be Rich In Water”

SpaceX Performs an Experimental High Retrothrust and Survives a Water Landing

This SpaceX rocket was performing a very high retro-thrust landing in water. It wasn't expected to survive, but did. Image: SpaceX

SpaceX’s most recent rocket launch saw the Falcon 9 perform a high retro-thrust over water, with no drone ship in sight. SpaceX never intended to reuse this rocket, and they haven’t said exactly why.

This launch was conducted on January 31st, and the payload was a communications satellite called GovSat-1. It’s a public-private partnership, and GovSat-1 is a heavy satellite which was placed into a particularly high orbit. It will be used by the government of Luxembourg, and by a private European company called SES. It’ll provide secure communications and surveillance for the military, and it has anti-jamming features to help it resist attack.

A high orbit and a heavy payload mean that the Falcon 9 that launched it might not have had enough fuel for its customary drone ship landing. But other Falcon 9s have launched payloads this high and landed on drone ships for reuse. So what gives?

According to SpaceX, they never planned to land and reuse this one. They didn’t say exactly why, but it’s been speculated that this booster was an older iteration of the Falcon 9 known as the Block 3. This is the second time SpaceX flew a Block 3 iteration without trying to reuse it. The first time, it carried 10 Iridium satellites into low-Earth orbit.

The Falcon 9 is flying in Block 4 configuration now, with Block 5 coming in the near future. SpaceX says that the Falcon 9 Block 5 will improve the performance and the reusability of the rocket in the future. They’ve also stated that the Block 5 will be the final configuration. Maybe they let this one land in the ocean because it’s just not needed anymore.

Reusable rocketry is SpaceX’s signature technology. The first stage of their Falcon 9 can be reconditioned and used again and again, keeping costs down. After lift-off and stage separation, the first-stage booster lands on a SpaceX drone ship, where it is secured and brought back to shore for reuse.

In this case, SpaceX wanted to test a high retro-thrust landing. The test consisted of three separate burns performed over water, rather than on a drone ship, to avoid damaging the ship. The rocket itself wasn’t expected to survive, but it did, at least partly, as Elon Musk confirmed on Twitter.

Retro-thrust burns are what allow SpaceX rockets like the Falcon 9 to land softly. The engines fire against the direction of travel, slowing the booster and cushioning its touchdown on the drone ship.

With the successful static test of SpaceX’s Falcon Heavy last week, a first launch for the Heavy is in sight. Testing high retro-thrust landings could be related to the upcoming first launch, even though, as Elon Musk said, merely getting the Falcon Heavy off the pad and back would constitute a successful first flight. But that’s just a guess.

The Falcon Heavy is designed to be reusable, just like its little brother, the Falcon 9. Reusability is key to SpaceX and is the whole reason Musk started the company: to make spaceflight more affordable, and to help humanity travel beyond the Moon.

SpaceX plans to tow this Falcon 9 back to shore and see if it can be salvaged. But after being dunked in salt water, any meaningful salvage seems unlikely. Who knows. Maybe Elon Musk will use it for flame-thrower target practice.

But the fate of this single rocket isn’t really that important in the grand scheme of things. What’s important is that SpaceX is still testing designs, and still pushing the boundaries of lower-cost spaceflight.

With that in mind, here’s hoping the whiz kids at SpaceX can destroy a few more rockets. After all, it’s all in the name of science.

The First Results From The IllustrisTNG Simulation Of The Universe Have Been Completed, Showing How Our Cosmos Evolved From The Big Bang

IllustrisTNG is a new simulation model for the Universe. It used over 24,000 processors over the course of more than two months to produce the largest hydrodynamic simulation project to date for the emergence of cosmic structures. Image: IllustrisTNG

The first results of the IllustrisTNG Project have been published in three separate studies, and they’re shedding new light on how black holes shape the cosmos, and how galaxies form and grow. The IllustrisTNG Project bills itself as “The next generation of cosmological hydrodynamical simulations.” The Project is an ongoing series of massive hydrodynamic simulations of our Universe. Its goal is to understand the physical processes that drive the formation of galaxies.

At the heart of IllustrisTNG is a state-of-the-art numerical model of the Universe, running on one of the most powerful supercomputers in the world: the Hazel Hen machine at the High-Performance Computing Center in Stuttgart, Germany. Hazel Hen is Germany’s fastest computer, and the 19th fastest in the world.

The Hazel Hen Supercomputer is based on Intel processors and Cray network technologies. Image: IllustrisTNG

Our current cosmological model suggests that the mass-energy density of the Universe is dominated by dark matter and dark energy. Since we can’t observe either of those things, the only way to test this model is to be able to make precise predictions about the structure of the things we can see, such as stars, diffuse gas, and accreting black holes. These visible things are organized into a cosmic web of sheets, filaments, and voids. Inside these are galaxies, which are the basic units of cosmic structure. To test our ideas about galactic structure, we have to make detailed and realistic simulated galaxies, then compare them to what’s real.

Astrophysicists in the USA and Germany used IllustrisTNG to create their own universe, which could then be studied in detail. IllustrisTNG correlates very strongly with observations of the real Universe, but allows scientists to look at things that are obscured in our own Universe. This has led to some very interesting results so far, and is helping to answer some big questions in cosmology and astrophysics.

How Do Black Holes Affect Galaxies?

Ever since we learned that galaxies host supermassive black holes (SMBHs) at their centers, it’s been widely believed that they have a profound influence on the evolution of galaxies, and possibly on their formation. That’s led to the obvious question: how do these SMBHs influence the galaxies that host them? IllustrisTNG set out to answer this, and the paper led by Dr. Dylan Nelson at the Max Planck Institute for Astrophysics shows that “the primary driver of galaxy color transition is supermassive black hole feedback in its low-accretion state.”

“The only physical entity capable of extinguishing the star formation in our large elliptical galaxies are the supermassive black holes at their centers.” – Dr. Dylan Nelson, Max Planck Institute for Astrophysics

Galaxies that are still in their star-forming phase shine brightly in the blue light of their young stars. Then something changes and the star formation ends. After that, the galaxy is dominated by older, red stars, and the galaxy joins a graveyard full of “red and dead” galaxies. As Nelson explains, “The only physical entity capable of extinguishing the star formation in our large elliptical galaxies are the supermassive black holes at their centers.” But how do they do that?

Nelson and his colleagues attribute it to supermassive black hole feedback in its low-accretion state. What that means is that as a black hole feeds, it creates a wind, or shock wave, that blows star-forming gas and dust out of the galaxy. This limits the future formation of stars. The existing stars age and turn red, and few new blue stars form.

This is a rendering of gas velocity in a massive galaxy cluster in IllustrisTNG. Black areas are hardly moving, and white areas are moving at greater than 1000km/second. The black areas are calm cosmic filaments, the white areas are near super-massive black holes (SMBHs). The SMBHs are blowing away the gas and preventing star formation. Image: IllustrisTNG

How Do Galaxies Form and How Does Their Structure Develop?

It’s long been thought that large galaxies form when smaller galaxies join up. As the galaxy grows larger, its gravity draws more smaller galaxies into it. During these collisions, galaxies are torn apart. Some stars will be scattered, and will take up residence in a halo around the new, larger galaxy. This should give the newly-created galaxy a faint background glow of stellar light. But this is a prediction, and these pale glows are very hard to observe.

“Our predictions can now be systematically checked by observers.” – Dr. Annalisa Pillepich (Max Planck Institute for Astronomy)

IllustrisTNG was able to predict more accurately what this glow should look like. This gives astronomers a better idea of what to look for when they try to observe this pale stellar glow in the real Universe. “Our predictions can now be systematically checked by observers,” points out Dr. Annalisa Pillepich (MPIA), who led a further IllustrisTNG study. “This yields a critical test for the theoretical model of hierarchical galaxy formation.”

A composite image from IllustrisTNG. Panels on the left show galaxy-galaxy interactions and the fine-grained structure of extended stellar halos. Panels on the right show stellar light projections from two massive central galaxies at the present day. It’s easy to see how the light from massive central galaxies overwhelms the light from stellar halos. Image: IllustrisTNG

IllustrisTNG is an ongoing series of simulations. So far, there have been three IllustrisTNG runs, each simulating a different volume: TNG50, TNG100, and TNG300. TNG300 covers a much larger volume than TNG50, which allows a larger area to be studied and reveals clues about large-scale structure. TNG50, though much smaller, has much finer detail; it gives us a more detailed look at the structural properties of galaxies and the detailed structure of gas around them. TNG100 sits somewhere in the middle.

TNG50, TNG100, and TNG300. Image: IllustrisTNG

IllustrisTNG is not the first cosmological hydrodynamical simulation. Others include EAGLE, Horizon-AGN, and IllustrisTNG’s predecessor, Illustris. They have shown how powerful these predictive theoretical models can be. As our computers grow more powerful, and our understanding of physics and cosmology grows along with them, these types of simulations will yield ever greater and more detailed results.

Now That NASA’s Missing IMAGE Satellite Has Been Found, Talking To It Is Going To Be Difficult

This picture shows NASA's IMAGE spacecraft undergoing launch preparations in early 2000. Credit: NASA

It’s easy to imagine the excitement NASA personnel must have felt when an amateur astronomer contacted NASA to tell them that he might have found their missing IMAGE satellite. After all, the satellite had been missing for 10 years.

IMAGE, which stands for Imager for Magnetopause-to-Aurora Global Exploration, was launched on March 25th, 2000. In Dec. 2005 the satellite failed to make routine contact, and in 2007 it failed to reboot. After that, the mission was declared over.

NASA’s IMAGE satellite. Credit: NASA

It’s astonishing that after 10 years, the satellite has been found. It’s even more astonishing that it was an amateur who found it. As if the story couldn’t get any more interesting, the amateur astronomer who found it—Scott Tilley of British Columbia, Canada—was actually looking for a different missing satellite: the secret Zuma spy satellite launched by the US government on January 7, 2018. (If you’re prone to wearing a tin foil hat, now might be a good time to reach for one.)

NASA’s half-ton IMAGE satellite being launched from Vandenberg Air Force Base on March 25th, 2000. IMAGE was the first satellite designed to actually “see” most of the major charged particle systems in the space surrounding Earth. Image: NASA

After Tilley contacted NASA, they hurried to confirm that it was indeed IMAGE that had been found. To do that, NASA employed five separate antennas to seek out any radio signals from the satellite. As of Monday, Jan. 29, signals received from all five sites were consistent with the radio frequency characteristics expected of IMAGE.

In a press release, NASA said, “Specifically, the radio frequency showed a spike at the expected center frequency, as well as side bands where they should be for IMAGE. Oscillation of the signal was also consistent with the last known spin rate for IMAGE.”

“…the radio frequency showed a spike at the expected center frequency…” – NASA Press Release confirming the discovery of IMAGE

Then, on January 30, the Johns Hopkins Applied Physics Lab (JHUAPL) reported that they had successfully collected telemetry data from the satellite. In that signal was the ID code 166, the code for IMAGE. There were probably some pretty happy people at NASA.
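NASA hasn’t described its identification pipeline in detail, but the basic signature it mentions, a spike at the expected carrier frequency with sidebands set by the spacecraft’s spin, is easy to illustrate. The sketch below is purely illustrative: the sample rate, carrier offset, and spin rate are made-up numbers, and the “received” signal is synthetic.

```python
# Illustration only (not NASA's actual pipeline): a spinning spacecraft whose
# signal is modulated at the spin rate shows up in a power spectrum as a spike
# at the carrier frequency plus sidebands offset by the spin frequency.
import numpy as np

fs = 10_000.0                      # sample rate, Hz (hypothetical)
t = np.arange(0, 2.0, 1.0 / fs)    # two seconds of samples
f_carrier, f_spin = 2_500.0, 0.5   # hypothetical carrier offset and spin rate, Hz

# Synthetic received signal: carrier amplitude-modulated at the spin rate, plus noise.
signal = (1.0 + 0.3 * np.cos(2 * np.pi * f_spin * t)) * np.cos(2 * np.pi * f_carrier * t)
signal += 0.1 * np.random.randn(t.size)

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

peak = freqs[np.argmax(power)]
print(f"Strongest line near {peak:.1f} Hz; sidebands expected at +/- {f_spin} Hz from it")
```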

So, now what?

A diagram of NASA’s IMAGE satellite. Image: NASA

NASA’s next step is to confirm without a doubt that this is indeed IMAGE. That means capturing and analyzing the data in the signal. That will be a technical challenge, because the types of hardware and operating systems used in the IMAGE Mission Operations Center no longer exist. According to NASA, “other systems have been updated several versions beyond what they were at the time, requiring significant reverse-engineering.” But that should be no problem for NASA. After all, they got Apollo 13 home safely, didn’t they?

If NASA is successful at decoding the data in the signal, the next step is to attempt to turn on IMAGE’s science payload. NASA has yet to decide how to proceed if they’re successful.

IMAGE was the first spacecraft designed to “see the invisible,” as they put it back then. Prior to IMAGE, spacecraft examined Earth’s magnetosphere by detecting particles and fields they encountered as they passed through them. But this method had limited success. The magnetosphere is enormous, and simply sampling a small path—while better than nothing—did not give us an accurate understanding of it.

During its mission, IMAGE did a lot of great science. In July 2000, a spectacular solar storm caused auroras as far south as Mexico. IMAGE captured these images of those powerful auroras. Credit: NASA

IMAGE was going to do things differently. It used 3-dimensional imaging techniques to measure simultaneously the densities, energies and masses of charged particles throughout the inner magnetosphere. To do this, IMAGE carried a payload of 7 instruments:

  • High Energy Neutral Atom (HENA) imager
  • Medium Energy Neutral Atom (MENA) imager
  • Low Energy Neutral Atom (LENA) imager
  • Extreme Ultraviolet (EUV) imager
  • Far Ultraviolet (FUV) imager
  • Radio Plasma Imager (RPI)
  • Central Instrument Data Processor (CIDP)

These instruments allowed IMAGE not only to do great science and capture great images, but also to create some stunning, never-before-seen movies of auroral activity.

This is a fascinating story, and it’ll be interesting to see if NASA can establish meaningful contact with IMAGE. Will it have a treasure trove of unexplored data on-board? Can it be re-booted and brought back into service? We’ll have to wait and see.

This story is also interesting culturally. IMAGE was in service at a time when the internet wasn’t as refined as it is currently. NASA has mastered the internet and public communications now, but back then? Not so much. For example, to build up interest around the mission, NASA gave IMAGE its own theme song, titled “To See The Invisible.” Yes, seriously.

But that’s just a side-note. IMAGE was all about great science, and it accomplished a lot. You can read all about IMAGE’s science achievements here.

Microbes May Help Astronauts Turn Human Waste Into Food

Researchers at Penn State University are developing a way to use microbes to turn human waste into food on long space voyages. Image: Yuri Gorby, Rensselaer Polytechnic Institute

Geoscience researchers at Penn State University are finally figuring out what organic farmers have always known: digestive waste can help produce food. But whereas farmers here on Earth can let microbes in the soil turn waste into fertilizer, which can then be used to grow food crops, the Penn State researchers have to take a different route. They are trying to figure out how to let microbes turn waste directly into food.

There are many difficulties with long-duration space missions, or with lengthy missions to other worlds like Mars. One of the most challenging is how to carry enough food. Food for a crew of astronauts on a 6-month voyage to Mars, plus enough for the return trip, weighs a lot. And all that weight has to be lifted into space by expensive rockets.

SpaceX’s reusable rockets are bringing down the cost of launching things into space, but the cost is still prohibitive. Any weight savings contribute to a mission’s feasibility, including a reduction in food supplies for long space journeys. In this image, a SpaceX Falcon 9 recycled rocket lifts off at sunset at 6:53 PM EDT on 11 Oct 2017. Credit: Ken Kremer/Kenkremer.com

Carrying enough food for a long voyage in space is problematic. Up until now, the solution for providing that food has been focused on growing it in hydroponic chambers and greenhouses. But that also takes lots of space, water, and energy. And time. It’s not really a solution.

“It’s faster than growing tomatoes or potatoes.” – Christopher House, Penn State Professor of Geosciences

What the researchers at Penn State, led by Professor of Geosciences Christopher House, are trying to develop is a method of turning waste directly into an edible, nutritious substance. Their aim is to cut out the middle man, as it were. And in this case, the middle men are the plants themselves: tomatoes, potatoes, and other fruits and vegetables.

We’ve always assumed that astronauts working on Mars would feed themselves by growing Earthly crops in simulated Earth conditions. But that requires a lot of energy, space, and materials. It may not be necessary. An artist’s illustration of a greenhouse on Mars. Image Credit: SAIC

“We envisioned and tested the concept of simultaneously treating astronauts’ waste with microbes while producing a biomass that is edible either directly or indirectly depending on safety concerns,” said Christopher House, professor of geosciences, Penn State. “It’s a little strange, but the concept would be a little bit like Marmite or Vegemite where you’re eating a smear of ‘microbial goo.'”

The Penn State team propose to use specific microorganisms to turn waste directly into edible biomass. And they’re making progress.

At the heart of their work are things called microbial reactors. Microbial reactors are basically vessels designed to maximize surface area for microbes to populate. These types of reactors are used to treat sewage here on Earth, but not to produce an edible biomass.

“It’s a little strange, but the concept would be a little bit like Marmite or Vegemite where you’re eating a smear of ‘microbial goo.'” – Christopher House, Penn State Professor of Geosciences

To test their ideas, the researchers constructed a cylindrical vessel four feet long by four inches in diameter. Inside it, they allowed select microorganisms to come into contact with human waste in controlled conditions. The process was anaerobic, and similar to what happens inside the human digestive tract. What they found was promising.

“Anaerobic digestion is something we use frequently on Earth for treating waste,” said House. “It’s an efficient way of getting mass treated and recycled. What was novel about our work was taking the nutrients out of that stream and intentionally putting them into a microbial reactor to grow food.”

One thing the team discovered is that the process readily produces methane. Methane is highly flammable, so very dangerous on a space mission, but it has other desirable properties when used in food production. It turns out that methane can be used to grow another microbe, called Methylococcus capsulatus. Methylococcus capsulatus is used as an animal food. Their conclusion is that the process could produce a nutritious food for astronauts that is 52 percent protein and 36 percent fats.

“We used materials from the commercial aquarium industry but adapted them for methane production.” – Christopher House, Penn State Professor of Geosciences

The process isn’t simple. The anaerobic process involved can produce pathogens very dangerous to people. To prevent that, the team studied ways to grow microbes in either an alkaline environment or a high-heat environment. After raising the system pH to 11, they found a strain of the bacteria Halomonas desiderata that thrived. Halomonas desiderata is 15 percent protein and 7 percent fats. They also cranked the system up to a pathogen-killing 158 degrees Fahrenheit, and found that the edible Thermus aquaticus grew, which is 61 percent protein and 16 percent fats.

Conventional waste treatment plants, like this one in England, take several days to treat waste. The anaerobic system tested by the Penn State team treated waste in as little as 13 hours. Image: Nick Allen, CC BY-SA 4.0

Their system is based on modern aquarium systems, where microbes live on the surface of a filter film. The microbes take solid waste from the stream and convert it to fatty acids. Then, those fatty acids are converted to methane by other microbes on the same surface.

Speed is a factor in this system. Existing waste management treatment typically takes several days. The team’s system removed 49 to 59 percent of solids in 13 hours.

This system won’t be in space any time soon. The tests were conducted on individual components, as proof of feasibility. A complete system that functioned together still has to be built. “Each component is quite robust and fast and breaks down waste quickly,” said House. “That’s why this might have potential for future space flight. It’s faster than growing tomatoes or potatoes.”

The team’s paper was published in the journal Life Sciences in Space Research.

The Most Detailed Map Ever Made of the Milky Way in Radio Waves

The FUGIN project used the 45 meter Nobeyama radio telescope in Japan to produce the most detailed radio wave map yet of the Milky Way. Top: Three color (false color) radio map of the Milky Way (l=10-50 deg) obtained by the FUGIN Project. Red, green, and blue represent the radio intensities of 12CO, 13CO, and C18O, respectively. Second Line: Infrared image of the same region obtained by the Spitzer Space Telescope. Red, green, and blue represent the intensities of 24 μm, 8 μm, and 5.8 μm infrared emission, respectively. Top Zoom-In: Three color radio map of the Milky Way (l=12-22 deg) obtained by the FUGIN Project. The colors are the same as the top image. Lower-Left Zoom-In: Enlarged view of the W51 region. The colors are the same as the top image. Lower-Right Zoom-In: Enlarged view of the M17 region. The colors are the same as the top image. Image: NAOJ/NASA/JPL-Caltech

A Japanese telescope has produced our most detailed radio wave image yet of the Milky Way galaxy. Over a 3-year time period, the Nobeyama 45 meter telescope observed the Milky Way for 1100 hours to produce the map. The image is part of a project called FUGIN (FOREST Unbiased Galactic plane Imaging survey with the Nobeyama 45-m telescope.) The multi-institutional research group behind FUGIN explained the project in the Publications of the Astronomical Society of Japan and at arXiv.

The Nobeyama 45 meter telescope is located at the Nobeyama Radio Observatory, near Minamimaki, Japan. The telescope has been in operation there since 1982, and has made many contributions to millimeter-wave radio astronomy in its life. This map was made using the new FOREST receiver installed on the telescope.

When we look up at the Milky Way, an abundance of stars, gas, and dust is visible. But there are also dark patches that look like voids. They’re not voids, though; they’re cold clouds of molecular gas that don’t emit visible light. To see what’s happening in these dark clouds requires radio telescopes like the Nobeyama.

The Nobeyama 45m radio telescope at the Nobeyama Radio Observatory in Japan. Image: NAOJ

The Nobeyama was the largest millimeter-wave radio telescope in the world when it began operation, and it has always had great resolution. But the new FOREST receiver has improved the telescope’s spatial resolution ten-fold. The increased power of the new receiver allowed astronomers to create this new map.

The new map covers an area of the night sky as wide as 520 full Moons. The detail of this new map will allow astronomers to study both large-scale and small-scale structures in new detail. FUGIN will provide new data on large structures like the spiral arms—and even the entire Milky Way itself—down to smaller structures like individual molecular cloud cores.

FUGIN is one of the legacy projects for the Nobeyama. These projects are designed to collect fundamental data for next-generation studies. To collect this data, FUGIN observed an area covering 130 square degrees, which is over 80% of the area between galactic latitudes -1 and +1 degrees and galactic longitudes from 10 to 50 degrees and from 198 to 236 degrees. Basically, the map tried to cover the 1st and 3rd quadrants of the galaxy, to capture the spiral arms, bar structure, and the molecular gas ring.

Starscape photograph taken at Nobeyama Radio Observatory by Norikazu Okabe. The FUGIN observation region (l=10-50 deg) is marked. Credit: National Astronomical Observatory of Japan

The aim of FUGIN is to investigate the physical properties of diffuse and dense molecular gas in the galaxy. It does this by simultaneously gathering data on three carbon monoxide isotopologues: 12CO, 13CO, and C18O. Researchers were able to study the distribution and motion of the gas, as well as physical characteristics like temperature and density. And the work has already paid off.
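As an aside for readers curious how a false-colour map like the one above gets assembled, here is a minimal sketch that stacks three integrated-intensity images into an RGB composite, mirroring the red = 12CO, green = 13CO, blue = C18O scheme of the FUGIN image. The FITS file names are placeholders, not actual FUGIN data product names, and the percentile scaling is just one reasonable choice.

```python
# Sketch: build a three-colour CO composite from three intensity maps.
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits

def normalise(img, pct=99.5):
    """Scale an intensity map to the 0-1 range, clipping at a high percentile."""
    top = np.nanpercentile(img, pct)
    return np.clip(img / top, 0.0, 1.0)

red = normalise(fits.getdata("fugin_12co_mom0.fits"))     # 12CO -> red channel (hypothetical file)
green = normalise(fits.getdata("fugin_13co_mom0.fits"))   # 13CO -> green channel (hypothetical file)
blue = normalise(fits.getdata("fugin_c18o_mom0.fits"))    # C18O -> blue channel (hypothetical file)

rgb = np.dstack([red, green, blue])
plt.imshow(rgb, origin="lower")
plt.title("False-colour CO composite (R: 12CO, G: 13CO, B: C18O)")
plt.show()
```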

FUGIN has already revealed things previously hidden. They include entangled filaments that weren’t obvious in previous surveys, as well as both wide-field and detailed structures of molecular clouds. Large scale kinematics of molecular gas such as spiral arms were also observed.

An artist’s image showing the major features of the Milky Way galaxy. Credit: NASA/JPL-Caltech, ESO, J. Hurt

But the main purpose is to provide a rich data-set for future work by other telescopes. These include other radio telescopes like ALMA, but also telescopes operating in the infrared and other wavelengths. This will begin once the FUGIN data is released in June, 2018.

Millimeter-wave radio astronomy is powerful because it can “see” things in space that other telescopes can’t. It’s especially useful for studying the large, cold gas clouds where stars form. These clouds are as cold as -262 °C (-440 °F). At temperatures that low, optical scopes can’t see them unless a bright star is shining behind them.

Even at these extremely low temperatures, there are chemical reactions occurring. This produces molecules like carbon monoxide, which was a focus of the FUGIN project, but also others like formaldehyde, ethyl alcohol, and methyl alcohol. These molecules emit radio waves in the millimeter range, which radio telescopes like the Nobeyama can detect.
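A quick worked number shows why this is called millimetre-wave astronomy. The 12CO J=1-0 line, one of the lines FUGIN mapped, has a rest frequency near 115 GHz, and converting that to a wavelength lands squarely in the millimetre range:

```python
# Wavelength of the 12CO J=1-0 line: lambda = c / frequency.
C_M_PER_S = 2.998e8       # speed of light
F_12CO_HZ = 115.271e9     # rest frequency of the 12CO J=1-0 transition

wavelength_mm = C_M_PER_S / F_12CO_HZ * 1e3
print(f"12CO(1-0) wavelength: {wavelength_mm:.2f} mm")  # ~2.6 mm
```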

The top-level purpose of the FUGIN project, according to the team behind the project, is to “provide crucial information about the transition from atomic gas to molecular gas, formation of molecular clouds and dense gas, interaction between star-forming regions and interstellar gas, and so on. We will also investigate the variation of physical properties and internal structures of molecular clouds in various environments, such as arm/interarm and bar, and evolutionary stage, for example, measured by star-forming activity.”

This new map from the Nobeyama holds a lot of promise. A rich data-set like this will be an important piece of the galactic puzzle for years to come. The details revealed in the map will help astronomers tease out more detail on the structures of gas clouds, how they interact with other structures, and how stars form from these clouds.

NASA’s InSight Lander Spreads Its Solar Wings. It’ll Fly To Mars In May 2018

The InSight lander responds to commands to spread its solar arrays during a January 23, 2018 test at the Lockheed Martin clean room in Littleton, Colorado. Image: Lockheed Martin Space

May 2018 is the launch window for NASA’s next mission to Mars, the InSight lander. InSight is the next member of what could be called a fleet of human-built vehicles destined for Mars. But rather than working on the question of Martian habitability or suitability for life, InSight will try to understand the deeper structure of Mars.

InSight stands for Interior Exploration using Seismic Investigations, Geodesy and Heat Transport. InSight will be the first robotic explorer to visit Mars and study the red planet’s deep interior. The work InSight does should answer questions about the formation of Mars, and those answers may apply to the history of the other rocky planets in the Solar System. The lander (InSight is not a rover) will also measure meteorite impacts and tectonic activity happening on Mars today.

This video helps explain why Mars is a good candidate to answer questions about how all our rocky planets formed, not just Mars itself.

InSight was conceived as part of NASA’s Discovery Program, a line of missions focused on important questions related to the “content, origin, and evolution of the solar system and the potential for life elsewhere”, according to NASA. Understanding how our Solar System and its planets formed is a key part of the Discovery Program, and it is the question InSight was built to answer.

This artist’s illustration of InSight on a photo background of Mars shows the lander fully deployed. The solar arrays are open, and in the foreground two of its instruments are shown. On the left is the SEIS instrument, and on the right is the HP3 probe. Image: NASA/Lockheed Martin

To do its work, InSight will deploy three instruments: SEIS, HP³, and RISE.

SEIS

This is InSight’s seismic instrument, designed to take the Martian pulse. It stands for Seismic Experiment for Internal Structure.

In this image, InSight’s Instrument Deployment Arm is practicing placing SEIS on the surface. Image: NASA/Lockheed Martin

SEIS sits patiently under its dome, which protects it from Martian wind and thermal effects, and waits for something to happen. What’s it waiting for? For seismic waves caused by Marsquakes, meteorite impacts, or by the churning of magma deep in the Martian interior. These waves will help scientists understand the nature of the material that first formed Mars and the other rocky planets.

HP³

HP³ is InSight’s heat probe. It stands for Heat Flow and Physical Properties Probe. Upon deployment on the Martian surface, HP³ will burrow 5 meters (16 ft.) into Mars. No other instrument has ever pierced Mars this deeply. Once there, it will measure the heat flowing deeply within Mars.

In this image, the Heat Flow and Physical Properties Probe is shown inserted into Mars. Image: NASA

Scientists hope that the heat measured by HP³ will help them understand whether or not Mars formed from the same material that Earth and the Moon formed from. It should also help them understand how Mars evolved after it was formed.
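The principle behind a heat-flow probe is simple to write down: the heat flowing out of the planet is the product of the ground’s thermal conductivity and the vertical temperature gradient (Fourier’s law). The numbers below are placeholders chosen only to illustrate the relation; they are not InSight measurements.

```python
# Fourier's law for a heat-flow probe: q = k * dT/dz (magnitude).
k_regolith = 0.04   # thermal conductivity, W/(m K)  -- hypothetical value
dT_dz = 0.5         # temperature gradient, K per metre -- hypothetical value

heat_flow_mw = k_regolith * dT_dz * 1e3   # W/m^2 converted to mW/m^2
print(f"Implied surface heat flow: {heat_flow_mw:.0f} mW/m^2")
```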

RISE

RISE stands for Rotation and Interior Structure Experiment. RISE will measure the Martian wobble as it orbits the Sun, by precisely tracking InSight’s position on the surface. This will tell scientists a lot about the deep inner core of Mars. The idea is to determine the depth at which the Martian core is solid. It will also tell us which elements are present in the core. Basically, RISE will tell us how Mars responds to the Sun’s gravity as it orbits the Sun. RISE consists of two antennae on top of InSight.

The two RISE antennae are shown in this image. RISE will reveal information about the Martian core by tracking InSight’s position while Mars orbits the Sun. Image: NASA/Lockheed Martin

InSight will land at Elysium Planitia, a flat, smooth plain just north of the Martian equator that is considered a perfect location for InSight to study the Martian interior. The landing site is not far from where Curiosity landed at Gale Crater in 2012.

InSight will land at Elysium Planitia, just north of the Martian equator. Image: NASA/JPL-CalTech

InSight will be launched to Mars from Vandenberg Air Force Base in California by an Atlas V-401 rocket. The trip to Mars will take about 6 months. Once on the Martian surface, InSight’s mission will have a duration of about 728 Earth days, or just over 1 Martian year.

InSight won’t be launching alone. The Atlas that launches the lander will also launch another NASA technology experiment. MarCO, or Mars Cube One, is a pair of suitcase-size CubeSats that will travel to Mars behind InSight. As they fly past Mars, their job is to relay InSight data back to Earth while the lander enters the Martian atmosphere and lands. This will be the first time that miniaturized CubeSat technology is tested at another planet.

One of the MarCO Cubesats that will be launched with InSight. This will be the first time that CubeSat technology will be tested at another planet. Image: NASA/JPL-CalTech

If the MarCO experiment is successful, it could be a new way of relaying mission data to Earth. MarCO will relay news of a successful landing, or of any problems, much sooner. However, the success of the InSight lander is not dependent on a successful MarCO experiment.

Where’s the Line Between Massive Planet and Brown Dwarf Star?

This artist's conception illustrates the brown dwarf named 2MASSJ22282889-431026, observed by NASA's Hubble and Spitzer space telescopes. Brown dwarfs are more massive and hotter than planets but lack the mass required to become stars. Image credit: NASA

When is a Brown Dwarf star not a star at all, but only a mere Gas Giant? And when is a Gas Giant not a planet, but a celestial object more akin to a Brown Dwarf? These questions have bugged astronomers for years, and they go to the heart of a new definition for the large celestial bodies that populate solar systems.

An astronomer at Johns Hopkins University thinks he has a better way of classifying these objects, and it’s based not only on mass, but on the company the objects keep and on how the objects formed. In a paper published in the Astrophysical Journal, Kevin Schlaufman makes his case for a new system of classification that could help us all get past some of the arguments about whether a given object is a gas giant planet or a brown dwarf. Mass is the easy-to-understand part of this new definition, but it’s not the only factor. How the object formed is also key.

In general, the less massive a star, the cooler it is. Though stars smaller than our Sun can still sustain heat-producing fusion reactions, protostars that are too small cannot. These “failed” stars are commonly known as brown dwarfs, and a new definition puts their range from between 10-75 times the mass of Jupiter. This artist’s concept compares the size of a brown dwarf to that of Earth, Jupiter, a low-mass star, and the Sun. (Credit: NASA/JPL-Caltech/UCB).

Schlaufman is an assistant professor in the Johns Hopkins Department of Physics and Astronomy. He has set a limit for what we should call a planet, and that limit is between 4 and 10 times the mass of our Solar System’s biggest planet, Jupiter. Above that, you’ve got yourself a Brown Dwarf star. (Brown Dwarfs are also called sub-stellar objects, or failed stars, because they never grew massive enough to become stars.)

“An upper boundary on the masses of planets is one of the most prominent details that was missing.” – Kevin Schlaufman, Johns Hopkins University, Dept. of Physics and Astronomy.

Improvements in observing other solar systems have led to this new definition. Where previously we only had our own Solar System as reference, we now can observe other solar systems with increasing effectiveness. Schlaufman observed 146 solar systems, and that allowed him to fill in some of the blanks in our understanding of brown dwarf and planet formation.

An image of Jupiter showing its storm systems. According to a new definition, Jupiter would be considered a brown dwarf if it had grown to over 10 times its mass when it was formed. Image: Gemini

“While we think we know how planets form in a big picture sense, there’s still a lot of detail we need to fill in,” Schlaufman said. “An upper boundary on the masses of planets is one of the most prominent details that was missing.”

Let’s back up a bit and look at how Brown Dwarfs and Gas Giants are related.

Solar systems are formed from clouds of gas and dust. In the early days of a solar system, one or more stars are formed out of this cloud by gravitational collapse. They ignite with fusion and become the stars we see everywhere in the Universe. The leftover gas and dust forms into planets, or brown dwarfs. This is a simplified version of solar system formation, but it serves our purposes.

In our own Solar System, only a single star formed: the Sun. The gas giants Jupiter and Saturn gobbled up most of the rest of the material, with Jupiter taking the lion’s share, making it the largest planet. But what if conditions had been different and Jupiter had kept growing? According to Schlaufman, if it had grown to more than 10 times its current mass, it would have become a brown dwarf. But that’s not where the new definition ends.

Metallicity and Chemical Makeup

Mass is only part of it. What’s really behind his new classification is the way in which the object formed. This involves the concept of metallicity in stars.

Stars have a metallicity. In astrophysics, this means the fraction of a star’s mass that is not hydrogen or helium, so any element from lithium on up is considered a metal. These metals are what rocky planets form from. The early Universe had only hydrogen and helium, and almost insignificant amounts of the next two elements, lithium and beryllium. So the first stars had no metallicity, or almost none.

This is an image of M80, an ancient globular cluster of stars. Since these stars formed in the early universe, their metallicity content is very low. This means that gas giants like Jupiter would be rare or non-existent here, while brown dwarfs are likely plentiful. Image: By NASA, The Hubble Heritage Team, STScI, AURA – Great Images in NASA Description, Public Domain, https://commons.wikimedia.org/w/index.php?curid=6449278

But now, 13.8 billion years after the Big Bang, younger stars like our Sun have more metal in them. That’s because generations of stars have lived and died, creating the metals taken up in subsequent star formation. Our own Sun formed about 5 billion years ago, and it has the metallicity we expect from a star with its birthdate. It’s still overwhelmingly made of hydrogen and helium, but about 2% of its mass is made of other elements, mostly oxygen, carbon, neon, and iron.
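In code, the astronomer’s definition of metallicity is a single subtraction. The mass fractions below are rough solar values, used only to illustrate the bookkeeping:

```python
# Metallicity in the astrophysical sense: Z = 1 - X - Y, the mass fraction
# that is neither hydrogen (X) nor helium (Y).
X_hydrogen = 0.74   # approximate solar hydrogen mass fraction
Y_helium = 0.24     # approximate solar helium mass fraction

Z_metals = 1.0 - X_hydrogen - Y_helium
print(f"Solar metallicity Z ~ {Z_metals:.2f}")  # ~0.02, i.e. about 2% "metals"
```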

This is where Schlaufman’s study comes in. According to him, we can distinguish between gas giants like Jupiter, and brown dwarfs, by the nature of the star they orbit. The types of planets that form around stars mirror the metallicity of the star itself. Gas giants like Jupiter are usually found orbiting stars with metallicity equal to or greater than our Sun. But brown dwarfs aren’t picky; they form around almost any star. Why?

Brown Dwarfs and Planets Form Differently

Planets like Jupiter are formed by accretion. A rocky core forms, then gas collects around it. Once the process is done, you have a gas giant. For this to happen, you need metals. If metals are present for these rocky cores to form, their presence will be reflected in the metallicity of the host star.

But brown dwarfs aren’t formed by accretion like planets are. They’re formed the same way stars are: by gravitational collapse. They don’t form from an initial rocky core, so metallicity isn’t a factor.

This brings us back to Kevin Schlaufman’s study. He wanted to find the mass above which an object stops caring about the metallicity of the star it orbits. He concluded that objects above 10 times the mass of Jupiter don’t care whether the star has rocky elements, because they don’t form from rocky cores. Hence, they’re not planets akin to Jupiter; they’re brown dwarfs that formed by gravitational collapse.
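Schlaufman’s analysis is statistical rather than a hard cutoff, but the boundary described above can be caricatured as a simple rule of thumb. The sketch below is only that, an illustration of the idea, not his actual method, and the tie-breaking use of host-star metallicity in the 4-10 Jupiter-mass range is a simplification:

```python
# Toy classifier based on the mass boundary discussed above (illustrative only).
def classify(mass_mjup: float, host_is_metal_rich: bool) -> str:
    if mass_mjup >= 10.0:
        return "brown dwarf (likely formed by gravitational collapse)"
    if mass_mjup <= 4.0:
        return "giant planet (likely formed by core accretion)"
    # Ambiguous 4-10 Jupiter-mass range: a metal-rich host hints at the
    # core-accretion (planet) channel; a metal-poor host hints at collapse.
    return "probable giant planet" if host_is_metal_rich else "probable brown dwarf"

print(classify(1.0, True))    # Jupiter itself
print(classify(13.0, False))  # well above the boundary
```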

What Does It Matter What We Call Them?

Let’s look at the Pluto controversy to understand why names are important.

The struggle to accurately classify all the objects we see out there in space is ongoing. Who can forget the plight of poor Pluto? In 2006, the International Astronomical Union (IAU) demoted Pluto, and stripped it of its long-standing status as a planet. Why?

Because the new definition of what a planet is relied on these three criteria:

  • a planet is in orbit around a star.
  • a planet must have sufficient mass to assume hydrostatic equilibrium (a nearly round shape).
  • a planet has cleared the neighbourhood around its orbit.

The more we looked at Pluto with better telescopes, the more we realized that it did not meet the third criterion, so it was demoted to dwarf planet. Sorry, Pluto.

Pluto was re-classified as a dwarf planet based on our growing understanding of its nature. Will Schlaufman’s new study help us more accurately classify gas giants and brown dwarfs? NASA’s New Horizons spacecraft captured this high-resolution enhanced color view of Pluto on July 14, 2015. Credit: NASA/JHUAPL/SwRI

Our naming conventions for astronomical objects are important, because they help people understand how everything fits together. But sometimes the debate over names can get tiresome. (The Pluto debate is starting to wear out its welcome, which is why some suggest we just call them all “worlds.”)

Though the Pluto debate is getting tiresome, it’s still important. We need some way of understanding what makes objects different, and names that reflect that difference. And the names have to reflect something fundamental about the objects in question. Should Pluto really be considered the same type of object as Jupiter? Are both really planets in the same sense? The IAU says no.

The same principle holds true with brown dwarfs and gas giants. Giving them names based solely on their mass doesn’t really tell us much. Schlaufman aims to change that.

His new definition makes sense because it relies on how and where these objects form, not simply their size. But not everyone will agree, of course.

Let the debate begin.

Finally! SpaceX’s Falcon Heavy Does its Static Fire Test. Actual Flight Should Be “In A Week Or So”

The Falcon Heavy Rocket being fired up at launch site LC-39A at NASA’s Kennedy Space Center in Cape Canaveral, Florida. Image: SpaceX

The long-awaited Static Fire of SpaceX’s Falcon Heavy rocket has been declared a success by SpaceX founder Elon Musk. After this successful test, the first launch of the Falcon Heavy is imminent, with Musk saying in a Tweet, “Falcon Heavy hold-down firing this morning was good. Generated quite a thunderhead of steam. Launching in a week or so.”

This is a significant milestone for the Falcon Heavy, considering that SpaceX initially thought the Heavy’s first flight would be in 2013. The first launch for the Falcon Heavy has always seemed to be tantalizingly out of reach. If space enthusiasts could’ve willed the thing into space, it would’ve launched years ago. But that’s not how it goes.

The Falcon Heavy generated an enormous amount of steam when it fired all 27 of its engines. Image: SpaceX

Developing rockets like the Falcon Heavy is not a simple matter. Even Musk himself admitted this when he said in July, “At first it sounds real easy: you just stick two first stages on as strap-on boosters. But then everything changes. All the loads change; aerodynamics totally change. You’ve tripled the vibration and acoustics.” So it’s not really a surprise that the Falcon Heavy’s development has seen multiple delays.

After first being announced in 2011, the rocket’s first flight was set for 2013. That date came and went, then in 2015 rocket failures postponed the flight. Failures postponed SpaceX again in 2016. New target dates were set for late 2016, then early 2017, then late 2017. But with this successful test, long-suffering space fans can finally breathe a sigh of relief, and their collective sigh will last about as long as the static fire: only a few seconds.

The Falcon Heavy has a total of 27 individual rocket engines, and all 27 of them were fired in this test, though the Heavy never left the launch pad. For those who don’t know, the Falcon Heavy is based on SpaceX’s successful Falcon 9 rocket, a nine-engine machine that made SpaceX the first commercial space company to visit the International Space Station, when a Falcon 9 delivered SpaceX’s Dragon capsule to the ISS in 2012. Since then, the Falcon 9 has built a track record of delivering cargo to the ISS and launching satellites into orbit.

The Heavy is like a Falcon 9 with two more 9-engine boosters strapped on. It will be the most powerful rocket in operation, by a large margin. (It won’t be the most powerful rocket in history though. That title still belongs to the Saturn V rocket, last launched in 1973.)

SpaceX Falcon 9 blasts off with KoreaSat-5A comsat from Launch Complex 39A at the Kennedy Space Center, FL, on 30 Oct 2017. The Falcon 9 has one core of 9 Merlin engines. Credit: Jeff Seibert

The Falcon Heavy will produce 5 million pounds of thrust at lift-off, and will be able to carry about 140,000 lbs to orbit, roughly three times what the Falcon 9 can carry. The Falcon 9’s first-stage core is reusable, and returns itself to Earth after separating from the second stage. The Falcon Heavy will do the same, with all three cores returning to Earth for reuse. The two outer cores will return to the launch pad at Cape Canaveral, and the center core will land on a drone ship in the Atlantic. This is part of the genius behind the SpaceX designs: reusable components keep the cost down.
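For those who like to check the arithmetic, here is the payload comparison in metric units. The Falcon 9 figure is an approximate published low-Earth-orbit capacity, included only to reproduce the “about three times” comparison above:

```python
# Quick unit check on the payload figures (Falcon 9 value is approximate).
LB_TO_KG = 0.4536

heavy_payload_lb = 140_000
falcon9_payload_lb = 50_000   # approximate Falcon 9 capacity to low Earth orbit

print(f"Falcon Heavy: ~{heavy_payload_lb * LB_TO_KG / 1000:.0f} tonnes to orbit")
print(f"Ratio to Falcon 9: ~{heavy_payload_lb / falcon9_payload_lb:.1f}x")
```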

An artist’s illustration of the Falcon Heavy rocket. The Falcon Heavy has 3 engine cores, each one containing 9 Merlin engines. Image: SpaceX

We aren’t exactly sure when the first launch of the Falcon Heavy will be, and its first launch may be a very short flight. It’s possible that it may only get a few feet off the launch pad. At a conference in July, Musk said, “I hope it makes it far enough beyond the pad so that it does not cause pad damage. I would consider even that a win, to be honest.”

We know a few things about the eventual first launch and flight of the Falcon Heavy. There won’t be any scientific or commercial payload on board. Rather, Musk intends to put his own personal Tesla Roadster on board as the payload. If successful, it will be the first car to go on a trip around the Sun. (I call shotgun!) It’s kind of silly to use a rocket to send a car around the Sun, but it will generate publicity. Not only for SpaceX, but for Tesla too.

If the launch is successful, the Falcon Heavy will be open for business. SpaceX already has some customers lined up for the Falcon Heavy, with a Saudi Arabian communications satellite first in line. After that, its second commercial mission will place several satellites in orbit. The US Air Force will be watching these launches closely, with an eye to using the Falcon Heavy for their own purposes.

But the real strength of the Falcon Heavy is not blasting cars on frivolous trips around the Sun, or placing communications satellites in orbit. Its destination is deep space.

Originally, SpaceX planned to use the Falcon Heavy to send people to Mars in a Dragon capsule. They’ve cancelled that idea, but the Heavy still has the capability to send rovers or other cargo to Mars and beyond. Who knows what uses it will be put to, once it has a track record of success.

We’re all eager to see the successful launch of the Falcon Heavy, but while we wait for it, we can enjoy this animation from SpaceX.