Last Man on the Moon, Gene Cernan, Has Died

Apollo astronaut Gene Cernan. Credit: NASA.

One of Apollo’s finest, astronaut Gene Cernan, has left Earth for the last time. Cernan, the last man to walk on the Moon, died Monday, January 16, 2017.

“Gene Cernan, Apollo astronaut and the last man to walk on the moon, has passed from our sphere, and we mourn his loss,” said NASA Administrator Charlie Bolden in a statement. “Leaving the moon in 1972, Cernan said, ‘As I take these last steps from the surface for some time into the future to come, I’d just like to record that America’s challenge of today has forged man’s destiny of tomorrow.’ Truly, America has lost a patriot and pioneer who helped shape our country’s bold ambitions to do things that humankind had never before achieved.”

In a statement, Cernan’s family said he was humbled by his life experiences, and he recently commented, “I was just a young kid in America growing up with a dream. Today what’s most important to me is my desire to inspire the passion in the hearts and minds of future generations of young men and women to see their own impossible dreams become a reality.”

“Even at the age of 82, Gene was passionate about sharing his desire to see the continued human exploration of space and encouraged our nation’s leaders and young people to not let him remain the last man to walk on the Moon,” the family continued.

A trailer for the film “The Last Man on the Moon:”

Cernan was a Captain in the U.S. Navy, but he is remembered most for his historic travels off Earth. He flew in space three times, twice to the Moon.

He was one of 14 astronauts selected by NASA in October 1963. He piloted the Gemini 9 mission with Commander Thomas Stafford on a three-day flight in June 1966. Cernan was the second American to conduct a spacewalk, and he logged more than two hours outside the Earth-orbiting Gemini capsule.

Gemini IXA pilot Eugene Cernan outside the spacecraft during his two-hour, eight-minute spacewalk on June 5, 1966. Credit: NASA/Tom Stafford.

In May 1969, he was the lunar module pilot of Apollo 10, and dramatically descended to within about 15 km (50,000 ft) of the Moon’s surface to test out the lunar lander’s capabilities, paving the way for Apollo 11’s first lunar landing two months later.

As Cernan flew the lunar module close to the surface, he radioed back to Earth, “I’m telling you, we are low. We’re close baby! … We is down among ‘em!”

Apollo 17 Mission Commander Eugene A. Cernan during the second spacewalk on December 12, 1972, standing near the lunar rover. Credit: NASA.

But his ultimate mission was landing on the Moon and walking across its surface during the Apollo 17 mission, the sixth and final mission to land on the Moon. During three EVAs to conduct surface operations within the Taurus-Littrow landing site, Cernan and his crewmate Harrison “Jack” Schmitt collected samples of the lunar surface and deployed scientific instruments.

On December 14, 1972, Cernan returned to the lunar module Challenger after the end of the third moonwalk, officially becoming the last human to set foot upon the Moon.

“Nobody can take those footsteps I made on the surface of the moon away from me.” – Eugene Cernan

Bolden said that in his last conversation with Cernan, “he spoke of his lingering desire to inspire the youth of our nation to undertake the STEM (science, technology, engineering and mathematics) studies, and to dare to dream and explore. He was one of a kind and all of us in the NASA Family will miss him greatly.”

Cernan’s words as he left the Moon’s surface give us hope that one day we will embark on human missions of space exploration once more.

“We shall return, in peace and hope, for all mankind.” – Gene Cernan.

A portion of a poem by space poet Stuart Atkinson is a wonderful remembrance:

Another One Falls

No mournful blare of trumpets but a forlorn Tweet announced
Another one had gone;
Another of the tallest redwoods in the forest of history
Had fallen, leaving a poorer world behind.

One by one they pass – the giants who dared to step
Off Terra, fly through a quarter million miles of deadly night
And stride across the Moon. On huge TVs in living rooms and schools
We watched them bounce across its ancient plains,
Snowmen stained by dust as cold and grey
As crematorium ash, mischievous boys with smiles flashing
Behind visors of burnished gold as they lolloped along,
Hopping like drunk kangaroos between boulders
Big as cars, so, so far away from Earth that their words
Came from the past –

And another one has gone.

(Read the full poem here.)

Apollo 17 mission commander Gene Cernan, the last man to walk on the moon, looks skyward during a memorial service celebrating the life of Neil Armstrong in 2012. Credit: NASA/Bill Ingalls

The Incredible Story of How the Huygens Mission to Titan Succeeded When It Could Have Failed

Artist depiction of Huygens landing on Titan. Credit: ESA

Twelve years ago today, the Huygens probe landed on Titan, the most distant landing from Earth any spacecraft has ever made. While a twelfth anniversary may be an odd number to mark with a special article, as we said in our previous article (with footage from the landing), this is the last opportunity to celebrate the success of Huygens before its partner spacecraft Cassini ends its mission on September 15, 2017 with a fateful plunge into Saturn’s atmosphere.

But Huygens is also worth celebrating because, amazingly, the mission almost failed, and yet was a marvelous success. If not for the insistence of one ESA engineer on an in-flight test of Huygens’ radio system, none of the spacecraft’s incredible data from Saturn’s largest and most mysterious moon would ever have been received, and likely, no one would have ever known why.

The first-ever images of the surface Titan, taken by the Huygens probe. Image Credit: ESA/NASA/JPL/University of Arizona

As I detail in my new book “Incredible Stories From Space: A Behind-the-Scenes Look at the Missions Changing Our View of the Cosmos,” in 1999, the Cassini orbiter and the piggybacking Huygens lander were on their way to the Saturn system. The duo launched in 1997, but instead of making a beeline for the sixth planet from the Sun, they took a looping path called the VVEJGA trajectory (Venus-Venus-Earth-Jupiter Gravity Assist), swinging around Venus twice and flying past Earth two years later.

While all the flybys gave the spacecraft added boosts to help get it to Saturn, the Earth flyby also provided a chance for the teams to test out various systems and instruments and get immediate feedback.

“The European group wanted to test the Huygens receiver by transmitting the data from Earth,” said Earl Maize, Project Manager for the Cassini mission at JPL, who I interviewed for the book. “That’s a great in-flight test, because there’s the old adage of flight engineers, ‘test as you fly, fly as you test.’”

The way the Huygens mission would work at the Saturn system was that Cassini would release Huygens when the duo approached Titan. Huygens would drop through Titan’s thick and obscuring atmosphere like a skydiver on a parachute, transmitting data all the while. The Huygens probe didn’t have enough power or a large enough dish to transmit all its data directly to Earth, so Cassini would gather and store Huygens’ data on board and later transmit it to Earth.

Boris Smeds was head of ESOC’s Systems and Requirements Section, Darmstadt, Germany. Credit: ESA.

ESA engineer Boris Smeds wanted to ensure this data handoff was going to work, otherwise a crucial part of the mission would be lost. So he proposed a test during the 1999 Earth flyby.

Maize said that for some reason, there was quite a bit of opposition to the test from some of the ESA officials, but Smeds and Claudio Sollazzo, Huygens’ ground operations manager at ESA’s European Space Operations Centre (ESOC) in Darmstadt, Germany, were insistent the test was necessary.

NASA’s Deep Space Network is responsible for communicating with spacecraft. Pictured is the Goldstone facility in California, one of three facilities that make up the Network. Image: NASA/JPL

“They were not to be denied,” Maize said, “so they eventually got permission for the test. The Cassini team organized it, going to the Goldstone tracking station [in California] of the Deep Space Network (DSN) and did what’s called a ‘suitcase test,’ broke into the signal, and during the Earth flyby, Huygens, Cassini and Goldstone were all programmed to simulate the probe descending to Titan. It all worked great.”

Except for one thing: Cassini received almost no simulated data, and what it did receive was garbled. No one could figure out why.

Six months of painstaking investigation finally identified the problem. The variation in speed between the two spacecraft hadn’t been properly compensated for, causing a communication problem. It was as if the spacecraft were each communicating on a different frequency.

Artist concept of the Huygens probe descending to Titan. Credit: ESA.

“The European team came to us and said we didn’t have a mission,” Maize said. “But we put together ‘Tiger Teams’ to try and figure it out.”

The short answer was that the idiosyncrasies in the communications system were hardwired in. With the spacecraft now millions of miles away, nothing could be fixed. But engineers came up with an ingenious solution using a basic principle known as the Doppler effect.

The metaphor Maize likes to use is this: if you are sitting on the shore and a speed boat goes by close to the coast, it zooms past you quickly. But that same boat going the same speed out on the horizon looks like it is barely moving.

“Since we couldn’t change Huygens’ signal, the only thing we could change was the way Cassini flew,” Maize said. “If we could move Cassini farther away and make it appear as if Huygens was moving slower, it would receive the lander’s radio waves at a lower frequency, solving the problem.”
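The underlying relation is simple: to first order, a transmitter closing at line-of-sight speed v shifts its received frequency by a fraction v/c. Here is a back-of-the-envelope sketch of that arithmetic; the carrier frequency and closing speeds below are illustrative assumptions, not the actual Cassini-Huygens link parameters.

```python
# First-order Doppler arithmetic (illustrative sketch, not mission software).
# The carrier frequency and line-of-sight speeds below are assumed values.
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f_tx_hz, v_los_mps):
    """Frequency offset seen by a receiver closing at v_los (m/s)."""
    return f_tx_hz * (v_los_mps / C)

f_carrier = 2.0e9  # assumed S-band carrier, roughly 2 GHz
for v_los in (5_500.0, 2_000.0):  # hypothetical closing speeds, m/s
    shift = doppler_shift(f_carrier, v_los)
    print(f"v_los = {v_los/1000:.1f} km/s -> carrier shift = {shift/1e3:.1f} kHz")

# The same fractional shift (v_los/c) also stretches the timing of the data
# bitstream. Cassini's hardwired receiver settings could not track that
# stretch, so the fix was a trajectory with a smaller line-of-sight speed.
```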

Maize said it took two years of “fancy coding modifications and some pretty amazing trajectory computations.” Huygens’ landing was also delayed by two months for the new trajectory that was needed to overcome the radio system design flaw.

Additionally, with Cassini needing to be farther away from Huygens than originally planned, it would eventually fly out of range to capture all of Huygens’ data. Astronomers devised a plan where radio telescopes around the world would listen for Huygens’ faint signals and capture anything Cassini missed.

Huygens was released from the Cassini spacecraft on Christmas Day 2004, and arrived at Titan on January 14, 2005. The probe began transmitting data to Cassini four minutes into its descent through Titan’s murky atmosphere, snapping photos and taking data all the while. Then it touched down, the first time a probe had landed on an extraterrestrial world in the outer Solar System.

Because of the communication problem, Huygens was not able to gather as much information as originally planned, as it could only transmit on one channel instead of two. But amazingly, Cassini captured absolutely all the data sent by Huygens until it flew out of range.

“It was beautiful,” Maize said, “I’ll never forget it. We got it all, and it was a wonderful example of international cooperation. The fact that 19 countries could get everything coordinated and launched in the first place was pretty amazing, but there’s nothing that compares to the worldwide effort we put into recovering the Huygens mission. From an engineering standpoint, that might trump everything else we’ve done on this mission.”

The view of Titan from the descending Huygens spacecraft on January 14, 2005. Credit: ESA/NASA/JPL/University of Arizona.

With its ground-breaking mission, Huygens provided the first real view of the surface of Titan. The data has been invaluable for understanding this unique and mysterious moon, showing geological and meteorological processes that are more similar to those on the surface of the Earth than anywhere else in the Solar System. ESA has details on the top discoveries by Huygens here.

Noted space journalist Jim Oberg has written several detailed and very interesting articles about the Huygens recovery, including one at IEEE Spectrum and another at The Space Review. These articles provide much more insight into the test, Smeds’ remarkable insistence on the test, the recovery work that was done, and the subsequent success of the mission.

As Oberg says in IEEE Spectrum, “Smeds continued a glorious engineering tradition of rescuing deep-space missions from doom with sheer persistence, insight, and lots of improvisation.”

A modest Smeds was quoted by ESA: “This has happened before. Almost any mission has some design problem,” said Smeds, who has worked on recovering from pre- and post-launch telecom issues that arose with several past missions. “To me, it’s just part of my normal work.”

For more stories about Huygens, Cassini and several other current robotic space missions, “Incredible Stories From Space” tells many behind-the-scenes stories from the amazing people who work on these missions.

What Will the Voyager Spacecraft Encounter Next? Hubble Helps Provide a Roadmap

An artist's concept of Voyager 1's view of the Solar System. Voyager 1 is one of our first interstellar probes, though it's an inadvertent one. It has no particular destination. Credit: NASA, ESA, and J. Zachary and S. Redfield (Wesleyan University); Artist's Illustration Credit: NASA, ESA, and G. Bacon (STScI).

The twin Voyager spacecraft are now making their way through the interstellar medium. Even though they are going where none have gone before, the path ahead is not completely unknown.

Astronomers are using the Hubble Space Telescope to observe the ‘road’ ahead for these pioneering spacecraft, to ascertain what various materials may lie along the Voyagers’ paths through space.

Combining Hubble data with the information the Voyagers are able to gather and send back to Earth, astronomers said a preliminary analysis reveals “a rich, complex interstellar ecology, containing multiple clouds of hydrogen laced with other elements.”

“This is a great opportunity to compare data from in situ measurements of the space environment by the Voyager spacecraft and telescopic measurements by Hubble,” said Seth Redfield of Wesleyan University, who led the study. “The Voyagers are sampling tiny regions as they plow through space at roughly 38,000 miles per hour. But we have no idea if these small areas are typical or rare. The Hubble observations give us a broader view because the telescope is looking along a longer and wider path. So Hubble gives context to what each Voyager is passing through.”

The combined data is also providing new insights into how our Sun travels through interstellar space, and astronomers hope that these combined observations will help them characterize the physical properties of the local interstellar medium.

“Ideally, synthesizing these insights with in situ measurements from Voyager would provide an unprecedented overview of the local interstellar environment,” said Hubble team member Julia Zachary of Wesleyan University.

The initial look at the clouds’ composition shows very small variations in the abundances of the chemical elements contained in the structures.

“These variations could mean the clouds formed in different ways, or from different areas, and then came together,” Redfield said.

In this illustration, NASA’s Hubble Space Telescope is looking along the paths of NASA’s Voyager 1 and 2 spacecraft as they journey through the solar system and into interstellar space. Hubble is gazing at two sight lines (the twin cone-shaped features) along each spacecraft’s path. The telescope’s goal is to help astronomers map interstellar structure along each spacecraft’s star-bound route. Each sight line stretches several light-years to nearby stars. Credit: NASA, ESA, and Z. Levy (STScI).

Astronomers are also seeing that the region that we and our solar system are passing through right now contains “clumpier” material, which may affect the heliosphere, the large bubble that is produced by our Sun’s powerful solar wind. At its boundary, called the heliopause, the solar wind pushes outward against the interstellar medium. Hubble and Voyager 1 made measurements of the interstellar environment beyond this boundary, where the wind comes from stars other than our sun.

“I’m really intrigued by the interaction between stars and the interstellar environment,” Redfield said. “These kinds of interactions are happening around most stars, and it is a dynamic process.”

Voyagers 1 and 2 launched in 1977, and both explored Jupiter and Saturn. Voyager 2 went on to visit Uranus and Neptune.

Voyager 1 is now 13 billion miles (20 billion km) from Earth, and in 2012 it entered interstellar space, the region between the stars that is filled with gas, dust, and material recycled from dying stars. It is the farthest a human-made spacecraft has ever traveled. The next big ‘landmark’ for Voyager 1 comes in about 40,000 years, when it will come within 1.6 light-years of the star Gliese 445, in the constellation Camelopardalis.

Voyager 2 is 10.5 billion miles (16.9 billion km) from Earth, and will pass 1.7 light-years from the star Ross 248 in about 40,000 years.

Of course, neither spacecraft will be operational by then.

But scientists hope that for at least the next 10 years, the Voyagers will be making measurements of interstellar material, magnetic fields, and cosmic rays along their trajectories. The complementary Hubble observations will help to map interstellar structure along the routes. Each sight line stretches several light-years to nearby stars. Sampling the light from those stars, Hubble’s Space Telescope Imaging Spectrograph measured how interstellar material absorbed some of the starlight, leaving telltale spectral fingerprints.

When the Voyagers run out of power and are no longer able to communicate with Earth, astronomers still hope to use observations from Hubble and subsequent space telescopes to characterize the environment where our robotic emissaries to the cosmos will travel.

Source: HubbleSite

Land On Titan With Huygens in Beautiful New Video

The view of Titan from the descending Huygens spacecraft on January 14, 2005. Credit: ESA/NASA/JPL/University of Arizona.

On December 25, 2004, the piggybacking Huygens probe was released from the ‘mothership’ Cassini spacecraft and it arrived at Titan on January 14, 2005. The probe began transmitting data to Cassini four minutes into its descent through Titan’s murky atmosphere, snapping photos and taking data all the while. Then it touched down, the first time a probe had landed on an extraterrestrial world in the outer Solar System.

JPL has released a re-mix of the data and images gathered by Huygens 12 years ago in a beautiful new video. This is the last opportunity to celebrate the success of Huygens before Cassini ends its mission in September of 2017.

Watch as Titan’s incredible surface comes into view, with mountains, a system of river channels, and a possible lakebed.

After a two-and-a-half-hour descent, the metallic, saucer-shaped spacecraft came to rest with a thud on a dark floodplain covered in cobbles of water ice, in temperatures hundreds of degrees below freezing.

Huygens had to quickly collect and transmit all the images and data it could because shortly after landing, Cassini would drop below the local horizon, “cutting off its link to the home world and silencing its voice forever.”

How much of this video is actual images and data vs computer graphics?

Of course, the clips at the beginning and end of the video are obviously animations of the probe and orbiter. However, the slowly descending first-person point-of-view video is made using actual images from Huygens. But Huygens did not take a continuous movie sequence, so a lot of work was done by the team that operated Huygens’ optical imager, the Descent Imager/Spectral Radiometer (DISR), to enhance, colorize, and re-project the images into a variety of formats.

The view of the cobblestones and the parachute shadow near the end of the video is also created from real landing data, but was made in a different way from the rest of the descent video, because Huygens’ cameras did not actually image the parachute shadow. However, the upward looking infrared spectrometer took a measurement of the sky every couple of seconds, recording a darkening and then brightening to the unobstructed sky. The DISR team calculated from this the accurate speed and direction of the parachute, and of its shadow to create a very realistic video based on the data.

If you’re a data geek, there are some great videos of Huygens’ data by the University of Arizona Lunar and Planetary Laboratory team, such as this one:

The movie shows the operation of the DISR camera during the descent onto Titan. The almost four-hour-long operation of DISR is shown in less than five minutes, at 40 times actual speed up to landing and 100 times actual speed thereafter.

Erich Karkoschka from the UA team explained what all the sounds in the video are. “All parts of DISR worked together as programmed, creating a harmony,” he said. Here’s the full explanation:

Sound was added to mark various events. The left speaker follows the motion of Huygens. The pitch of the tone indicates the rotational speed. Vibrato indicates vibration of the parachute. Little clicks indicate the clocking of the rotation counter. Noise corresponds to heating of the heat shield, to parachute deployments, to the heat shield release, to the jettison of the DISR cover, and to touch down.

The sound in the right speaker follows DISR data. The pitch of the continuous tone goes with the signal strength. The 13 different chime tones indicate activity of the 13 components of DISR. The counters at the top and bottom of the list get the high and low notes, respectively.
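For readers curious how a sonification like this is put together, here is a minimal sketch in the spirit of the left-speaker rotation tone: a made-up rotation-rate series drives the pitch of a sine tone. The data values and the pitch mapping are hypothetical, not the UA team’s actual tooling.

```python
# Minimal data-sonification sketch (hypothetical values; not the UA tooling):
# map a rotation-rate series to the pitch of a sine tone, half a second each.
import numpy as np
import wave

SAMPLE_RATE = 44_100
rotation_rpm = [2.0, 3.5, 5.0, 4.0, 1.5]  # made-up descent rotation rates

chunks = []
for rpm in rotation_rpm:
    freq = 220.0 + 40.0 * rpm  # pitch rises with rotation speed
    t = np.arange(int(0.5 * SAMPLE_RATE)) / SAMPLE_RATE
    chunks.append(0.3 * np.sin(2 * np.pi * freq * t))
audio = np.concatenate(chunks)

with wave.open("descent_tone.wav", "wb") as f:
    f.setnchannels(1)           # mono
    f.setsampwidth(2)           # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```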

You can see more info and videos created from Huygens’ data here.

Read some reminiscences about Huygens from some of the Cassini team here.

Confirmed: We Really Are ‘Star Stuff’

An artist’s depiction of how the spectra of elements in the stars of the Milky Way reflect the importance these elements play in human life. Credit: Dana Berry/SkyWorks Digital Inc.; SDSS collaboration.

Scientist Carl Sagan said many times that “we are star stuff,” from the nitrogen in our DNA to the calcium in our teeth and the iron in our blood.

It is well known that most of the essential elements of life are truly made in the stars. Called the “CHNOPS elements” – carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur – these are the building blocks of all life on Earth. Astronomers have now measured all of the CHNOPS elements in 150,000 stars across the Milky Way, the first time such a large number of stars have been analyzed for these elements.

“For the first time, we can now study the distribution of elements across our Galaxy,” says Sten Hasselquist of New Mexico State University. “The elements we measure include the atoms that make up 97% of the mass of the human body.”

Astronomers with the Sloan Digital Sky Survey made their observations with the APOGEE (Apache Point Observatory Galactic Evolution Experiment) spectrograph on the 2.5m Sloan Foundation Telescope at Apache Point Observatory in New Mexico. This instrument looks in the near-infrared to reveal signatures of different elements in the atmospheres of stars.

Quote from Carl Sagan. Credit: Pinterest

While the observations were used to create a new catalog that is helping astronomers gain a new understanding of the history and structure of our galaxy, the findings also demonstrate “a clear human connection to the skies,” said the team.

While humans are 65% oxygen by mass, oxygen makes up less than 1% of the mass of all of the elements in space. Stars are mostly hydrogen, but small amounts of heavier elements such as oxygen can be detected in the spectra of stars. With these new results, APOGEE has found more of these heavier elements in the inner part of the galaxy. Stars in the inner galaxy are also older, so this means more of the elements of life were synthesized earlier in the inner parts of the galaxy than in the outer parts.

So what does that mean for those of us out on the outer edges of one of the Milky Way’s spiral arms, about 25,000 light-years from the center of the galaxy?

“I think it’s hard to say what the specific implications are for when life could arise,” said team member Jon Holtzman, also from New Mexico State, in an email to Universe Today. “We measure typical abundance of CHNOPS elements at different locations, but it’s not so easy to determine at any given location the time history of the CHNOPS abundances, because it’s hard to measure ages of stars. On top of that, we don’t know what the minimum amount of CHNOPS would need to be for life to arise, especially since we don’t really know how that happens in any detail!”

Holtzman added it is likely that, if there is a minimum required abundance, that minimum was probably reached earlier in the inner parts of the Galaxy than where we are.

The team also said that while it’s fun to speculate how the composition of the inner Milky Way Galaxy might impact how life might arise, the SDSS scientists are much better at understanding the formation of stars in our Galaxy.

“These data will be useful to make progress on understanding Galactic evolution,” said team member Jon Bird of Vanderbilt University, “as more and more detailed simulations of the formation of our galaxy are being made, requiring more complex data for comparison.”

Sloan Foundation 2.5m Telescope at Apache Point Observatory. Credit: SDSS.

“It’s a great human interest story that we are now able to map the abundance of all of the major elements found in the human body across hundreds of thousands of stars in our Milky Way,” said Jennifer Johnson of The Ohio State University. “This allows us to place constraints on when and where in our galaxy life had the required elements to evolve, a sort of ‘temporal Galactic habitable zone.’”

The catalog is available at the SDSS website, so take a look for yourself at the chemical abundances in our portion of the galaxy.

Source: SDSS

Martian Spacecraft Spies Earth and the Moon

A view of Earth and its Moon, as seen from Mars. It combines two images acquired on Nov. 20, 2016, by the HiRISE camera on NASA's Mars Reconnaissance Orbiter, with brightness adjusted separately for Earth and the moon to show details on both bodies. Credit: NASA/JPL-Caltech/Univ. of Arizona.

The incredible HiRISE camera on board the Mars Reconnaissance Orbiter turned its eyes away from its usual target – Mars’ surface – and, for calibration purposes only, took some amazing images of Earth and our Moon. Combined to create one image, this is a marvelous view of our home from about 127 million miles (205 million kilometers) away.

Alfred McEwen, principal investigator for HiRISE said the image is constructed from the best photo of Earth and the best photo of the Moon from four sets of images. Interestingly, this combined view retains the correct positions and sizes of the two bodies relative to each other. However, Earth and the Moon appear closer than they actually are in this image because the observation was planned for a time at which the Moon was almost directly behind Earth, from Mars’ point of view, to see the Earth-facing side of the Moon.


“Each is separately processed prior to combining (in correct relative positions and sizes), so that the Moon is bright enough to see,” McEwen wrote on the HiRISE website. “The Moon is much darker than Earth and would barely show up at all if shown at the same brightness scale as Earth. Because of this brightness difference, the Earth images are saturated in the best Moon images, and the Moon is very faint in the best (unsaturated) Earth image.”
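In practice, “separately processed” can be as simple as stretching each body’s brightness on its own scale before pasting both into one frame. A minimal sketch with stand-in arrays (not real HiRISE data or the actual HiRISE pipeline):

```python
# Stretch two image regions independently, then composite them
# (stand-in random data; not the HiRISE processing pipeline).
import numpy as np

def stretch(img, pct=99.5):
    """Rescale so the pct-th percentile maps to 1.0, clipping above."""
    return np.clip(img / np.percentile(img, pct), 0.0, 1.0)

earth = np.random.rand(100, 100)        # bright stand-in for Earth
moon = 0.07 * np.random.rand(40, 40)    # much darker stand-in for the Moon

canvas = np.zeros((100, 220))
canvas[:, :100] = stretch(earth)        # each body gets its own stretch,
canvas[30:70, 160:200] = stretch(moon)  # then correct relative placement
```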

Earth looks reddish because the HiRISE imaging team used color filters similar to the Landsat images where vegetation appears red.

“The image color bandpasses are infrared, red, and blue-green, displayed as red, green, and blue, respectively,” McEwen explained. “The reddish blob in the middle of the Earth image is Australia, with southeast Asia forming the reddish area (vegetation) near the top; Antarctica is the bright blob at bottom-left. Other bright areas are clouds. We see the western near-side of the Moon.”

HiRISE took these pictures on Nov. 20, 2016, and this is not the first time HiRISE has turned its eyes towards Earth. Back in 2007, HiRISE took the image below from Mars’ orbit when it was just 88 million miles (142 million km) from Earth. This one is more like how future astronauts might see Earth and the Moon through a telescope from Mars’ orbit.

An image of Earth and the Moon, acquired on October 3, 2007, by the HiRISE camera on NASA’s Mars Reconnaissance Orbiter. Credit: NASA/JPL-Caltech/University of Arizona.

If you look closely, you can make out a few features on our planet. The west coast outline of South America is at lower right on Earth, although the clouds are the dominant features. In fact, the clouds were so bright, compared with the Moon, that they almost completely saturated the filters on the HiRISE camera. The people working on HiRISE say this image required a fair amount of processing to make such a nice-looking picture.

You can see an image from a previous Mars orbiter, the Mars Global Surveyor, which took a picture of Earth, the Moon and Jupiter — all in one shot — back in 2003, here.

See this JPL page for high resolution versions of the most recent Earth/Moon image.

Source of Mysterious ‘Fast’ Radio Signals Pinpointed, But What Is It?

Gemini composite image of the field around FRB 121102, the only repeating FRB discovered so far. Credit: Gemini Observatory/AURA/NSF/NRC.

For about 10 years, radio astronomers have been detecting mysterious millisecond-long blasts of radio waves, called “fast radio bursts” (FRBs).

While only 18 of these events have been detected so far, one FRB has been particularly intriguing as the signal has been sporadically repeating. First detected in November 2012, astronomers didn’t know if FRB 121102 originated from within the Milky Way galaxy or from across the Universe.

A concentrated search by multiple observatories around the world has now determined that the signals are coming from a dim dwarf galaxy about 2.5 billion light-years from Earth. But astronomers are still uncertain about exactly what is creating these bursts.

“These radio flashes must have enormous amounts of energy to be visible from that distance,” said Shami Chatterjee from Cornell University, speaking at a press briefing at the American Astronomical Society meeting this week. Chatterjee and his colleagues have papers published today in Nature and Astrophysical Journal Letters.

The globally distributed dishes of the European VLBI Network are linked with each other and the 305-m William E. Gordon Telescope at the Arecibo Observatory in Puerto Rico. Credit: Danielle Futselaar.

The patch of sky where the signal originated is in the constellation Auriga, and Chatterjee said it is mere arc minutes in diameter. “In that patch are hundreds of sources. Lots of stars, lots of galaxies, lots of stuff,” he said, which made the search difficult.

The Arecibo radio telescope, the observatory that originally detected the event, has a resolution of three arc minutes, or about one-tenth of the Moon’s diameter, so that was not precise enough to identify the source. Astronomers used the Very Large Array in New Mexico and the European Very Long Baseline Interferometry (VLBI) Network to help narrow down the origin. But, said co-author Casey Law from the University of California, Berkeley, that also created a lot of data to sort through.

“It was like trying to find a needle in a terabyte haystack,” he said. “It took a lot of algorithmic work to find it.”

Finally, on August 23, 2016, the source made itself known with nine extremely bright bursts.

“We had struggled to be able to observe the faintest bursts we could,” Law said, “but suddenly here were nine of the brightest ones ever detected. This FRB was generous to us.”

The team was not only able to pinpoint the source to the distant dwarf galaxy; co-author Jason Hessels from ASTRON/University of Amsterdam said they were also able to determine that the bursts came not from the center of the galaxy, but from slightly off-center. That might indicate the bursts didn’t originate from a central black hole. Upcoming observations with the Hubble Space Telescope might be able to pinpoint the location even further.

Gemini composite image of the field around FRB 121102 (indicated). The dwarf host galaxy was imaged, and spectroscopy performed, using the Gemini Multi-Object Spectrograph (GMOS) on the Gemini North telescope on Maunakea in Hawai’i. Data was obtained on October 24-25 and November 2, 2016. Credit: Gemini Observatory/AURA/NSF/NRC.

What makes this source burst repeatedly?

“We don’t know yet what caused it or the physical mechanism that makes such bright and fast pulses,” said Sarah Burke-Spolaor, from West Virginia University. “The FRB could be outflow from an active galactic nucleus (AGN) or it might be more familiar, such as a distant supernova remnant, or a neutron star.”

Burke-Spolaor added that they don’t know yet if all FRBs are created equal, as so far FRB 121102 is the only repeater. The team hopes there will be other examples detected.

“It may be a magnetar – a newborn neutron star with a huge magnetic field, inside a supernova remnant or a pulsar wind nebula – somehow producing these prodigious pulses,” said Chatterjee. “Or, it may be a combination of all these ideas – explaining why what we’re seeing may be somewhat rare.”

For additional reading:
Gemini Observatory
Berkeley
Nature
Nature News

NASA’s Favorite Photos of 2016

The Soyuz MS-01 spacecraft launches from the Baikonur Cosmodrome on July 7, 2016 bringing a new crew to the International Space Station. Credit: (NASA/Bill Ingalls)

There is a group of unsung heroes at NASA: the people who travel the world to capture key events in our exploration of space. They share their images with all of us, but most of the time, it’s not just the pictures of launches, landings, and crucial mission events that they capture. They also show us behind-the-scenes events that otherwise might go unnoticed, and they capture the true personalities of the people behind the missions and events.

From the exciting beginnings of rocket launches and rocket tests to the sad losses of space exploration icons, these photographers are there to take the images that will forever remind us of the glories and perils of spaceflight and the joys and sadness of human life.

NASA photographers Bill Ingalls, Aubrey Gemignani, Joel Kowsky, Connie Moore, and Gwen Pitman chose some of their favorite images from 2016, and below are just a few. As Ingalls told us, “These are the favorite images created by our HQ photo team, not from the entire agency. There are many more talented photographers at the NASA centers producing some amazing work as well.”

In this 30-second exposure taken with a circular fish-eye lens, a meteor streaks across the sky during the annual Perseid meteor shower as a photographer wipes moisture from the camera lens, Friday, August 12, 2016 in Spruce Knob, West Virginia. Photo Credit: (NASA/Bill Ingalls)
The team from the Juno mission celebrate after they received confirmation from the spacecraft that it had successfully completed the engine burn and entered orbit of Jupiter on July 4, 2016 in mission control of the Space Flight Operations Facility at the Jet Propulsion Laboratory in Pasadena, CA. Juno will orbit the planet for 20 months to collect data on the planetary core, map the magnetic field, and measure the amount of water and ammonia in the atmosphere. Credit: (NASA/Aubrey Gemignani)
The United Launch Alliance Atlas V rocket carrying NASA’s Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) spacecraft lifts off from Space Launch Complex 41 on Sept. 8, 2016 at Cape Canaveral Air Force Station in Florida. OSIRIS-REx will be the first U.S. mission to sample an asteroid, retrieve at least two ounces of surface material and return it to Earth for study. The asteroid, Bennu, may hold clues to the origin of the solar system and the source of water and organic molecules found on Earth. Photo Credit: (NASA/Joel Kowsky)
Annie Glenn, widow of former astronaut and Senator John Glenn, pays her respects to her late husband as he lies in repose, under a United States Marine honor guard, in the Rotunda of the Ohio Statehouse in Columbus, Friday, Dec. 16, 2016. Credit: (NASA/Bill Ingalls)
Piers Sellers, former astronaut and deputy director of the Sciences and Exploration Directorate at NASA’s Goddard Space Flight Center, speaks at NASA’s Earth Day event, Friday, April 22, 2016 at Union Station in Washington, DC. Sadly, Sellers passed away on Dec. 23, after battling cancer. Credit: (NASA/Joel Kowsky)
The Soyuz TMA-20M spacecraft is seen as it lands with Expedition 48 crew members NASA astronaut Jeff Williams, Russian cosmonauts Alexey Ovchinin, and Oleg Skripochka of Roscosmos near the town of Zhezkazgan, Kazakhstan on Wednesday, Sept. 7, 2016. Credit: (NASA/Bill Ingalls)
Following his year in space on board the International Space Station, astronaut Scott Kelly spoke during an event at the United States Capitol Visitor Center, on May 25, 2016, in Washington. Credit: (NASA/Bill Ingalls)
The second and final qualification motor (QM-2) test for the Space Launch System’s booster is seen, Tuesday, June 28, 2016, at Orbital ATK Propulsion Systems test facilities in Promontory, Utah. During the Space Launch System flight the boosters will provide more than 75 percent of the thrust needed to escape the gravitational pull of the Earth, the first step on NASA’s Journey to Mars. Credit: (NASA/Bill Ingalls)
NASA astronaut Peggy Whitson gets her hair cut on Nov. 14, 2016 at the Cosmonaut Hotel in Baikonur, Kazakhstan, a few days before launching to spend about six months on the International Space Station. Credit: (NASA/Bill Ingalls)

Click on each of the images to see larger versions on Flickr. You can see the entire selection of these favorite photos from 2016 on the NASA HQ Flickr page.

NASA Might Build an Ice House on Mars

Artist concept of the Mars Ice Home. Credit: NASA.

At first glance, a new concept for a NASA habitat on Mars looks like a cross between Mark Watney’s inflatable potato farm from “The Martian” and the home of Luke’s Uncle Owen on Tatooine from “Star Wars.”

The key to the new design relies on something that may or may not be abundant on Mars: underground water or ice.

The “Mars Ice Home” is a large inflatable dome that is surrounded by a shell of water ice. NASA said the design is just one of many potential concepts for creating a sustainable home for future Martian explorers. The idea came from a team at NASA’s Langley Research Center that started with the concept of using resources on Mars to help build a habitat that could effectively protect humans from the elements on the Red Planet’s surface, including high-energy radiation.

The Mars Ice Home concept. Credit: Clouds Architecture Office, NASA Langley Research Center, Space Exploration Architecture.

Langley senior systems engineer Kevin Vipavetz, who facilitated the design session, said the team assessed “many crazy, out of the box ideas and finally converged on the current Ice Home design, which provides a sound engineering solution.”

The advantages of the Mars Ice Home are that the shell is lightweight and can be transported and deployed with simple robotics, then filled with water before the crew arrives. The ice will protect astronauts from radiation and will provide a safe place to call home, NASA says. But the structure also serves as a storage tank for water, to be used by the explorers or potentially converted to rocket fuel for the proposed Mars Ascent Vehicle. Then the structure could be refilled for the next crew.

A cutaway of the interior of the Mars Ice Home concept. Credit: NASA Langley/Clouds AO/SEArch.

Other concepts had astronauts living in caves, or underground, or in dark, heavily shielded habitats. The team said the Ice Home concept balances the need to provide protection from radiation without the drawbacks of an underground habitat. The design maximizes the thickness of ice above the crew quarters to reduce radiation exposure while still allowing light to pass through the ice and surrounding materials.

Team members of the Ice Home Feasibility Study discuss past and present technology development efforts in inflatable structures at NASA’s Langley Research Center. Credit: Courtesy of Kevin Kempton/NASA.

“All of the materials we’ve selected are translucent, so some outside daylight can pass through and make it feel like you’re in a home and not a cave,” said Kevin Kempton, also part of the Langley team.

One key constraint is the amount of water that can be reasonably extracted from Mars. Experts who develop systems for extracting resources on Mars indicated that it would be possible to fill the habitat at a rate of one cubic meter, or 35.3 cubic feet, per day. This rate would allow the Ice Home design to be completely filled in 400 days, so the habitat would need to be constructed robotically well before the crew arrives. The design could be scaled up if water could be extracted at higher rates.
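The stated rate and fill time together imply the shell’s capacity. A quick check of the arithmetic; the volume figure is inferred from those two numbers, not quoted by NASA:

```python
# Fill-time arithmetic for the Ice Home shell. The ~400 m^3 volume is
# inferred from the stated rate and duration, not quoted directly by NASA.
RATE_M3_PER_DAY = 1.0   # stated extraction rate: 1 cubic meter per day
FILL_DAYS = 400         # stated fill time

volume_m3 = RATE_M3_PER_DAY * FILL_DAYS   # implied shell volume, ~400 m^3
print(f"Implied ice volume: {volume_m3:.0f} m^3")

# Doubling the extraction rate would halve the robotic pre-fill phase:
print(f"Days at 2 m^3/day: {volume_m3 / 2.0:.0f}")
```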

The team also wanted to include large areas for workspace so the crew wouldn’t have to wear a pressure suit to do maintenance tasks such as working on robotic equipment. To manage temperatures inside the Ice Home, a layer of carbon dioxide gas — also available on Mars — would be used as insulation between the living space and the thick shielding layer of ice.

“The materials that make up the Ice Home will have to withstand many years of use in the harsh Martian environment, including ultraviolet radiation, charged-particle radiation, possibly some atomic oxygen, perchlorates, as well as dust storms – although not as fierce as in the movie ‘The Martian’,” said Langley researcher Sheila Ann Thibeault.

Find out more about the concept here.

Another cutaway of the interior design of the Mars Ice Home concept. Credit: NASA Langley/ Clouds AO/SEArch.

Book Excerpt: “Incredible Stories From Space,” Roving Mars With Curiosity, part 3

This self-portrait of NASA's Curiosity Mars rover shows the vehicle at the "Big Sky" site. Credit: NASA/JPL-Caltech/MSSS

Following is the final excerpt from my new book, “Incredible Stories From Space: A Behind-the-Scenes Look at the Missions Changing Our View of the Cosmos.” The book is an inside look at several current NASA robotic missions, and this excerpt is part 3 of 3 posted here on Universe Today, from Chapter 2, “Roving Mars with Curiosity.” You can read Part 1 here, and Part 2 here. The book is available in print or e-book (Kindle or Nook) at Amazon and Barnes & Noble.

How to Drive a Mars Rover

How does Curiosity know where and how to drive across Mars’ surface? You might envision engineers at JPL using joysticks, similar to those used for remote control toys or video games. But unlike RC driving or gaming, the Mars rover drivers don’t have immediate visual inputs or a video screen to see where the rover is going. And just like at the landing, there is always a time delay of when a command is sent to the rover and when it is received on Mars.

“It’s not driving in a real-time interactive sense because of the time lag,” explained John Michael Morookian, who leads the team of rover drivers.

The actual job title of Morookian and his team members is ‘Rover Planner,’ which precisely describes what they do. Instead of ‘driving’ the rover per se, they plan out the route in advance, program specialized software, and upload the instructions to Curiosity.

“We use images taken by the rover of its surroundings,” said Morookian. “We have a set of stereo images from four black-and-white Navigation Cameras, along with images from the Hazcams (hazard avoidance cameras), supported by high-resolution color images from the MastCam that give us details about the nature of the terrain ahead and clues about types of rocks and minerals at the site. This helps identify structures that look interesting to the scientists.”

Using all available data, they can create a three-dimensional visualization of the terrain with specialized software called the Rover Sequencing and Visualization Program (RSVP).

“This is basically a Mars simulator and we put a simulated Curiosity in a panorama of the scene to visualize how the rover could traverse on its path,” Morookian explained. “We can also put on stereo glasses, which allow our eyes to see the scene in three dimensions as if we were there with the rover.”

In virtual reality, the rover drivers can manipulate the scene and the rover to test every possibility of which routes are the best and what areas to avoid. There, they can make all the mistakes (get stuck in a dune, tip the rover, crash into a big rock, drive off a precipice) and perfect the driving sequence while the real rover remains safe on Mars.

“The scientists also review the images for features that are interesting and consult with the Rover Planners to help define a path. Then we compose the detailed commands that are necessary to get Curiosity from Point A to Point B along that path,” Morookian said. “We can also incorporate the commands needed to give the rover direction to make contact with the site using its robotic arm.”

When Curiosity’s Navigation Cameras (Navcams) take black-and-white images and send them back to Earth each day, rover planners combine them with other rover data to create 3D terrain models. By adding a computerized 3D rover model to the terrain model, rover planners can understand better the rover’s position, as well as distances to, and scale of, features in the landscape. Credit: NASA/JPL-Caltech.

So, every night the rover is commanded to shut down for eight hours to recharge its batteries with the nuclear generator. But first Curiosity sends data to Earth, including pictures of the terrain and any science information. On Earth, the Rover Planners take that data, do their planning work, complete the software programming and beam the information back to Mars. Then Curiosity wakes up, downloads the instructions and sets to work. And the cycle repeats.

Curiosity also has an AutoNav feature which allows the rover to traverse areas the team hasn’t seen yet in images. So, it could go over the hill and down the other side to uncharted territory, with the AutoNav sensing potential hazards.

“We don’t use it too often because it is computationally expensive, meaning it takes much longer for the rover to operate in that mode,” Morookian said. “We often find it’s a better trade to just come in the next day, look at the images and drive as far as we can see.”

A view of the Space Flight Operations Facility at the Jet Propulsion Laboratory, where all the data going both to and from all planetary missions is sent and received via the Deep Space Network. Credit: Nancy Atkinson.

As Morookian showed me the various rooms used by rover planning teams at JPL, he explained how they need to operate over a number of different timescales.

“We not only have the daily route planning,” he said, “but also do long-range strategic planning using orbital imagery from the HiRISE camera on the Mars Reconnaissance Orbiter and choose paths based on features seen from orbit. Our team works strategically, looking many months out to define the best paths.”

Another process called Supra-Tactical looks out to just the next week. This involves science planners managing and refining the types of activities the rover will be doing in the short term. Also, since no one on the team lives on Mars Time anymore, on Fridays the Rover Planners work out the plans for several days.

“Since we don’t work weekends, Friday plans contain multiple sols of activities,” Morookian said. “Two parallel teams decide which days the rover will drive and which days it will do other activities, such as work with the robotic arm or other instruments.”

The data that comes down from the rover over the weekend is monitored, however, and if there is a problem, a team is called in to do a more detailed assessment. Morookian indicated they’ve had to engage the emergency weekend team several times, but so far there have been no serious problems. “It does keep us on our toes, however,” he said.

The rover features a number of reactive safety checks on the amount of overall tilt of the rover deck and the articulation of the suspension system of the wheels, so if the rover is going over an object that is too large, it will automatically stop.

Curiosity wasn’t built for speed. It was designed to travel up to 660 feet (200 meters) in a day, but it rarely travels that far in a sol. By early 2016 the rover had driven a total of about 7.5 miles (12 km) across Mars’ surface.

This image shows a close-up of track marks left by the Curiosity rover. Holes in the rover’s wheels, seen here in this view, leave imprints in the tracks that can be used to help the rover drive more accurately. The imprint is Morse code for ‘JPL,’ and aids in tracking how far the rover has traveled. Credit: NASA/JPL-Caltech.

There are several ways to determine how far Curiosity has traveled, but the most accurate measurement is called ‘Visual Odometry.’ Curiosity has specialized holes in its wheels in the shape of Morse code letters, spelling out ‘JPL’ – a nod to the home of the rover’s science and engineering teams – across the Martian soil.

“Visual odometry works by comparing the most recent pair of stereo images collected roughly every meter over the drive,” said Morookian. “Individual features in the scene are matched and tracked to provide a measure of how the camera (and thus the rover) has translated and rotated in 3 dimensional space between the two images and it tells us in a very real sense how far Curiosity has gone.”
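As a rough illustration of the general technique, here is a generic frame-to-frame sketch, not JPL’s flight software: match features between two consecutive frames, then recover the camera’s rotation and direction of travel. The function names come from OpenCV; the intrinsic matrix K and the input frames are assumed.

```python
# Generic frame-to-frame visual odometry sketch with OpenCV (illustrative;
# not JPL's flight software). prev_img/curr_img are grayscale arrays and
# K is an assumed 3x3 camera intrinsic matrix.
import cv2
import numpy as np

def vo_step(prev_img, curr_img, K):
    """Estimate rotation R and unit-scale translation t between frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(curr_img, None)

    # Brute-force Hamming matching; cross-checking rejects weak matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC outlier rejection, then pose recovery
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is unit-norm; stereo (as on Curiosity) fixes the real scale
```

A single camera only recovers translation up to an unknown scale; Curiosity’s stereo pairs are what pin the motion down to real meters.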

Careful inspection of the rover tracks can reveal the type of traction the wheels have and if they have slipped, for instance due to high slopes or sandy ground.

Unfortunately, Curiosity now has new holes in its wheels that aren’t supposed to be there.

Rover Problems

Morookian and Project Scientist Ashwin Vasavada both expressed relief and satisfaction that overall — this far into the mission — Curiosity is a fairly healthy rover. The entire science payload is currently operating at nearly full capability. But the engineering team keeps an eye on a few issues.

“Around sol 400, we realized the wheels were wearing faster than we expected,” Vasavada said.

The team operating the Curiosity Mars rover uses the Mars Hand Lens Imager (MAHLI) camera on the rover’s arm to check the condition of the wheels at routine intervals. This image of Curiosity’s left-middle and left-rear wheels is part of an inspection set taken on April 18, 2016, during the 1,315th sol of the rover’s work on Mars. Credit: NASA/JPL-Caltech/MSSS.

And the wear didn’t consist of just little holes; the team started to see punctures and nasty tears. Engineers realized the holes were being created by the hard, jagged rocks the rover was driving over during that time.

“We weren’t fully expecting the kind of ‘pointy’ rocks that were doing damage,” Vasavada said. “We also did some testing and saw how one wheel could push another wheel into a rock, making the damage worse. We now drive more carefully and don’t drive as long as we have in the past. We’ve been able to level off the damage to a more acceptable rate.”

Early in the mission, Curiosity’s computer went into ‘safe mode’ several times: the software recognized a problem, and its response was to disallow further activity and phone home.

Specialized fault protection software runs throughout the modules and instruments, and when a problem occurs, the rover stops and sends data called ‘event records’ to Earth. The records include various categories of urgency, and in early 2015, the rover sent a message that essentially said, “This is very, very bad.” The drill on the rover’s arm had experienced a fluctuation in an electrical current – like a short circuit.

“Curiosity’s software has the ability to detect shorts, like the ground fault circuit interrupter you have in your bathroom,” Morookian explained, “except this one tells you ‘this is very, very bad’ instead of just giving you a yellow light.”

Since the team can’t go to Mars and repair a problem, everything is fixed either by sending software updates to the rover or by changing operational procedures.

Curiosity’s drill in the turret of tools at the end of the robotic arm positioned in contact with the rock surface for the first drilling of the mission on the 170th sol of Curiosity’s work on Mars (Jan. 27, 2013) in Yellowknife Bay. The picture was taken by the front Hazard-Avoidance Camera (Hazcam). Image credit: NASA/JPL-Caltech.

“We are just more careful now with how we use the drill,” Vasavada said, “and don’t drill with full force at the beginning, but slowly ramp up. It’s sort of like how we drive now, more gingerly but it still gets the job done. It hasn’t been a huge impact as of yet.”

A lighter touch on the drill also was necessary for the softer mudstones and sandstones the rover encountered. Morookian said there was concern the layered rocks might not hold up under the assault of the standard drilling protocol, and so they adjusted the technique to use the lowest ‘settings’ that still allows the drill to make sufficient progress into the rock.

But opportunities to use the drill are increasing as Curiosity begins its traverse up the mountain. The rover is traveling through what Vasavada calls a “target rich, very interesting area,” as the science team works to tie together the geological context of everything they are seeing in the images.

Finding Balance on Mars

While the diversion at Yellowknife Bay allowed the team to make some major discoveries, they felt pressure to get to Mt. Sharp, so “drove like hell for a year,” Vasavada said.

Now on the mountain, there is still the pressure to make the most of the mission, with the goal of making it through at least four different rock units, or layers, on Mt. Sharp. Each layer could be like a chapter in the book of Mars’ history.

A portion of a panorama from Curiosity’s Mastcam shows the rugged surface of ‘Naukluft Plateau’ plus part of the rim of Gale Crater, taken on April 4, 2016 or Sol 1301. Credit: NASA/JPL-Caltech/MSSS

“Exploring Mt. Sharp is fascinating,” Vasavada said, “and we’re trying to maintain a mix between really great discoveries, which – you hate to say — slows us down, and getting higher on the mountain. Looking closely at a rock in front of you means you’ll never be able to go over and look at that other interesting rock over there.”

Vasavada and Morookian both said it’s a challenge to preserve that balance every day — to find what’s called the ‘knee in the curve’ or ‘sweet spot’ of the perfect optimization between driving and stopping for science.

Then there’s the balance between stopping to do a full observation with all the instruments and doing ‘flyby science’ where less intense observations are made.

“We take the observations we can, and generate all the hypotheses we can in real time,” Vasavada said. “Even if we’re left with 100 open questions, we know we can answer the questions later as long as we know we’ve taken enough data.”

Curiosity’s primary target is not the summit, but instead a region about 1,330 feet (400 meters) up where geologists expect to find the boundary between rocks that saw a lot of water in their history, and those that didn’t. That boundary will provide insight into Mars’ transition from a wet planet to dry, filling in a key gap in the understanding of the planet’s history.

The Curiosity rover recorded this view of the Sun setting at the close of the mission’s 956th sol (April 15, 2015), from the rover’s location in Gale Crater. This was the first sunset observed in color by Curiosity. The image comes from the left-eye camera of the rover’s Mast Camera (Mastcam). Credit: NASA/JPL-Caltech/MSSS/Texas A&M University.

No one really knows how long Curiosity will last, or if it will surprise everyone like its predecessors Spirit and Opportunity. Having made it past the ‘prime mission’ of one Mars year (two Earth years), and now in the extended mission, the one big variable is the RTG power source. While the available power will start to steadily decrease, both Vasavada and Morookian don’t expect that to be an issue for at least four more Earth years, and with the right “nurturing,” power could last for a dozen years or more.

But they also know there’s no way to predict how long Curiosity will go, or what unexpected event might end the mission.

The Beast

Does Curiosity have a personality like the previous Mars rovers?

“Actually no, we don’t seem to anthropomorphize this rover like people did with Spirit and Opportunity,” Vasavada said. “We haven’t bonded emotionally with it. Sociologists have actually been studying this.” He shook his head with an amused smile.

Vasavada indicated it might have something to do with Curiosity’s size.

“I think of it as a giant beast,” he said straight-faced. “But not in a mean way at all.”

Curiosity appears to be photobombing Mount Sharp in this selfie image, a mosaic created from several MAHLI images. Credit: NASA/JPL-Caltech/MSSS/Edited by Jason Major.

What has come to characterize this mission, Vasavada said, is the complexity of it, in every dimension: the human component of getting 500 people to work and cooperate together while optimizing everyone’s talents; keeping the rover safe and healthy; and keeping ten instruments going every day, which are sometimes doing completely unrelated science tasks.

“Every day is our own little ‘seven minutes of terror,’ where so many things have to go right every single day,” Vasavada said. “There are a million potential issues and interactions, and you have to constantly be thinking about all the ways things can go wrong, because there are a million ways you can mess up. It’s an intricate dance, but fortunately we have a great team.”

Then he added with a smile, “This mission is exciting though, even if it’s a beast.”

“Incredible Stories From Space: A Behind-the-Scenes Look at the Missions Changing Our View of the Cosmos” is published by Page Street Publishing, a subsidiary of Macmillan.

Author Nancy Atkinson at JPL with a model of the Curiosity Rover.