You’d have to be an intrepid explorer to investigate something named ‘Cape Tribulation’. Opportunity, NASA’s long-lived rover on the surface of Mars, has been just that. But Opportunity is now leaving Cape Tribulation behind, after spending about 30 months there since arriving in late 2014.
Cape Tribulation is the name given to a segment of crater rim at Endeavour Crater, where Opportunity has been for over 5 1/2 years. During that time, Opportunity reached some important milestones. While there, it surpassed 26 miles in distance travelled, the length of a marathon race. It also reached its highest elevation yet, and in ‘Marathon Valley’, it investigated clay outcrops first spotted from orbit. Opportunity also had some struggles there: its flash memory stopped working, meaning all data had to be transmitted to Earth each day or be lost.
Before reaching Cape Tribulation 30 months ago, Opportunity investigated other parts of Endeavour Crater called “Cape York,” “Solander Point” and “Murray Ridge.”
The rover’s next destination is Perseverance Valley, where it will investigate how the valley was carved out billions of years ago: by water, by wind, or perhaps by flowing material lubricated by water. Before leaving Cape Tribulation, Opportunity captured a panoramic image of ‘Rocheport,’ a section of the Endeavour Crater rim marked by grooves on its side. “The degree of erosion at Rocheport is fascinating,” said Opportunity Deputy Principal Investigator Ray Arvidson, of Washington University in St. Louis. “Grooves run perpendicular to the crest line. They may have been carved by water or ice or wind. We want to see as many features like this on the way to Perseverance Valley as we can, for comparison with what we find there.”
Endeavour Crater is about 22 km in diameter, and Perseverance Valley is about two football fields long. The goal at Endeavour is to investigate its segmented rim, where some of the oldest rocks yet investigated on Mars are exposed. Since the beginning of April, Opportunity has travelled about 98 meters, to a point where Cape Tribulation meets the plain around the crater.
“From the Cape Tribulation departure point, we’ll make a beeline to the head of Perseverance Valley, then turn left and drive down the full length of the valley, if we can,” Arvidson said. “It’s what you would do if you were an astronaut arriving at a feature like this: Start at the top, looking at the source material, then proceed down the valley, looking at deposits along the way and at the bottom.”
It’s the nature of those deposits that can give vital clues to how Perseverance Valley was formed. Arvidson said, “If it was a debris flow, initiated by a little water, with lots of rocks moving downhill, it should be a jumbled mess. If it was a river cutting a channel, we may see gravel bars, crossbedding, and what’s called a ‘fining upward’ pattern of sediments, with coarsest rocks at the bottom.”
Opportunity and its sister rover Spirit arrived at Mars in 2004, each with a planned mission length of 90 days. Opportunity has surpassed that by over 12 years, and continues to perform extremely well in the Martian environment.
One of the most common features of space exploration has been the use of disposable components to get missions to where they are going. Whether we are talking about multistage rockets (which fall away as soon as they are spent) or the hardware used to achieve Entry, Descent and Landing (EDL) onto a planet, the idea has been the same. Once the delivery mechanism is used up, it is cast away.
However, in so doing, we could be creating a hazardous situation for future missions. Such is the conclusion reached by a new study from the Finnish Meteorological Institute in Helsinki, Finland. With regard to the use of Entry, Descent and Landing (EDL) systems, the study’s author – Dr. Mark Paton – concludes that jettisoned hardware from missions to Mars could create a terrible mess near future landing sites.
Dr. Mark Paton is a planetary research scientist who specializes in the interaction between the Martian atmosphere and its surface. As such, he is well-versed in the subject of EDL systems that are designed to land missions on Solar System bodies that have atmospheres. This is certainly an ongoing concern for Mars, where landers and rovers have relied on various means to get to the surface safely.
Consider the Curiosity rover, which used a separate EDL system – known as the Sky Crane – to land on Mars in 2012. As the first EDL system of its kind, the Sky Crane was essentially a rocket-powered backpack mounted on top of the rover. This system kicked in after Curiosity separated from its descent module (which was slowed by a parachute) and used rockets to slow the rover’s descent even further.
Once it was sufficiently close to the surface, the Sky Crane lowered the rover to the ground with tethers measuring 6.4 meters (21 ft) long. It then detached and landed a safe distance away, not far from where the descent module’s heat shield, backshell, and parachute came down. These jettisoned bits were all photographed a day after the landing by the HiRISE instrument aboard NASA’s Mars Reconnaissance Orbiter.
Unfortunately, this kind of technology does not address another major concern – the accumulation of spent hardware components on the surface of a planet. In time, these could pose risks for future missions, mainly because they could be blown around and clutter up other landing sites located not far away.
As Dr. Paton indicated in an interview with Seeker columnist (and Universe Today alumna) Elizabeth Howell:
“Currently available landing systems, using heat shield and parachutes, might be problematic because jettisoned hardware from these landers normally land within a few hundred meters of the lander. I would imagine a sample return mission would not jettison its parachute in close vicinity of the target sample or the cached sample. The parachute might cover the sample, making its retrieval a problem. Landers using large parachutes or other large devices probably pose the greatest risk as these could be easily blown onto equipment on the surface, damaging or covering it.”
For the sake of his study, Dr. Paton relied on 3D computer modelling (using the space flight simulator Orbiter) to examine different types of EDL systems. He then combined this with meteorological measurements of wind speeds and directions within the Martian Planetary Boundary Layer (PBL), in order to determine their influence on the distribution of jettisoned components across the surface of Mars.
What he found was that wind speeds within the Martian PBL are sufficient to blow around certain types of EDL hardware. This includes parachutes – a mainstay of space missions – as well as next-generation concepts like the HIAD (Hypersonic Inflatable Aerodynamic Decelerator). Basically, these components could be blown onto prelanded assets, even when the lander itself has touched down several kilometers away.
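To get a rough feel for the scale of the problem, consider a toy drift estimate: a canopy cut loose at some altitude sinks at a steady rate while the wind carries it sideways. The sketch below is purely illustrative – the altitude, descent rate, and wind speed are assumed values, not figures from Dr. Paton’s simulations:

```python
# Toy drift estimate (not Dr. Paton's actual model): a jettisoned
# canopy falls at a steady descent rate while a uniform PBL wind
# carries it sideways. All input values are illustrative assumptions.

def drift_distance_m(jettison_altitude_m, descent_rate_m_s, wind_speed_m_s):
    """Horizontal drift = wind speed x time spent falling."""
    time_to_ground_s = jettison_altitude_m / descent_rate_m_s
    return wind_speed_m_s * time_to_ground_s

# e.g. a parachute cut loose 1 km up, sinking at 20 m/s through a
# 15 m/s afternoon boundary-layer wind:
print(f"{drift_distance_m(1000.0, 20.0, 15.0):.0f} m downwind")  # -> 750 m
```

Even this crude estimate puts the canopy hundreds of meters downwind, and a grounded parachute can keep migrating whenever the boundary-layer winds pick up again.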
This could play havoc with robotic missions that have sensitive equipment or are attempting to collect samples for return to Earth. And as for crewed missions – such as NASA’s proposed “Journey to Mars”, which is expected to take place in the 2030s – the results could be even worse. Crew habitats, which will be part of all future crewed missions, will rely on solar panels and other devices that need to be free of clutter in order to function.
As such, Dr. Paton advises that future missions be designed so that the amount of hardware they leave behind is minimized. In addition, he advises that future missions take meteorological measurements into account, to make sure that jettisoned components are not likely to blow back and interfere with operations in progress.
“For new landing systems, a detailed trade-off analysis would be required to determine the best way to mitigate this problem,” he said. “To be sure that the wind is blowing away from any landed assets, the winds in the lower few kilometers of the atmosphere would ideally need to be measured close to the time of the lander’s expected arrival.”
As if planning missions to Mars wasn’t already challenging enough! In addition to all the things we need to worry about in getting there, now we need to worry about keeping our landing sites in pristine order. But of course, such considerations are understandable since our presence on Mars is expanding, and many key missions are planned for the coming years.
These include more robotic rovers in the next decade – i.e. NASA’s Mars 2020 rover, the ESA’s ExoMars rover, and the ISRO’s Mangalyaan 2 rover – and even NASA’s proposed “Journey to Mars” by the 2030s. If we’re going to make Mars a regular destination, we need to learn to pick up after ourselves!
Saturn’s largest Moon, Titan, is the only other world in our Solar System that has stable liquid on its surface. That alone, and the fact that the liquid is composed of methane, ethane, and nitrogen, makes it an object of fascination. The bright spot features that Cassini observed in the methane seas that dot the polar regions only deepen the fascination.
A new paper published in Nature Astronomy digs deeper into a phenomenon in Titan’s seas that has been puzzling scientists. In 2013, Cassini noticed a feature that wasn’t there on previous fly-bys of the same region. In subsequent images, the feature had disappeared again. What could it be?
One explanation is that the feature could be a disappearing island, rising and falling in the liquid. This idea took hold, but was only an initial guess. Adding to the mystery was the doubling in size of these potential islands. Others speculated that they could be waves, the first waves observed anywhere other than on Earth. Binding all of these together was the idea that the appearance and disappearance could be caused by seasonal changes on the moon.
Now, scientists at NASA’s Jet Propulsion Laboratory (JPL) think they know what’s behind these so-called ‘disappearing islands,’ and it seems like they are related to seasonal changes.
The study was led by Michael Malaska of JPL. The researchers simulated the frigid conditions on Titan, where the surface temperature is -179.2 °C. At that temperature, some interesting things happen to the nitrogen in Titan’s atmosphere.
On Titan, it rains. But the rain is composed of extremely cold methane. As that methane falls to the surface, it absorbs significant amounts of nitrogen from the atmosphere. The rain hits Titan’s surface and collects in lakes in the moon’s polar regions.
The researchers manipulated the conditions in their experiments to mirror the changes that occur on Titan. They changed the temperature, the pressure, and the methane/ethane composition. As they did so, they found that nitrogen bubbled out of solution.
“Our experiments showed that when methane-rich liquids mix with ethane-rich ones — for example from a heavy rain, or when runoff from a methane river mixes into an ethane-rich lake — the nitrogen is less able to stay in solution,” said Michael Malaska of JPL. This release of nitrogen is called exsolution. It can occur when the seasons change on Titan, and the seas of methane and ethane experience a slight warming.
“Thanks to this work on nitrogen’s solubility, we’re now confident that bubbles could indeed form in the seas, and in fact may be more abundant than we’d expected,” said Jason Hofgartner of JPL, a co-author of the study who also works on Cassini’s radar team. These nitrogen bubbles would be very reflective, which explains why Cassini was able to see them.
The seas on Titan may be what’s called a prebiotic environment, where chemical conditions are hospitable to the appearance of life. Some think that the seas may already be home to life, though there’s no evidence of this, and Cassini wasn’t equipped to investigate that premise. Some experiments have shown that an atmosphere like Titan’s could generate complex molecules, and even the building blocks of life.
NASA and others have talked about different ways to explore Titan, including balloons, a drone, splashdown landers, and even a submarine. The submarine idea even received a NASA grant in 2015, to develop the idea further.
So, mystery solved, probably. Titan’s bright spots are neither islands nor waves, but bubbles.
Cassini’s mission will end soon, and it’ll be quite some time before Titan can be investigated further. The question of whether Titan’s seas are hospitable to the formation of life, or whether there may already be life there, will have to wait. What role the nitrogen bubbles play in Titan’s life question will also have to wait.
In 2005, the Future In-Space Operations Working Group (FISOWG) was established with the help of NASA to assess how advances in spaceflight technologies could be used to facilitate missions back to the Moon and beyond. In 2006, the FISO Working Group also established the FISO Telecon Series to conduct outreach to the public and educate them on issues pertaining to spaceflight technology, engineering, and science.
Every week, the Telecon Series holds a seminar where experts are able to share the latest news and developments from their respective fields. On Wednesday, April 19th, in a seminar titled “An Air-Breathing Metal-Combustion Power Plant for Venus in situ Exploration”, NASA engineer Michael Paul presented a novel idea for how existing technology could be used to mount longer-duration missions to Venus.
To recap the history of Venus exploration, very few probes have ever been able to explore its atmosphere or surface for long. Not surprising, considering that the atmospheric pressure on Venus is 92 times what it is here on Earth at sea level. Not to mention the fact that Venus is also the hottest planet in the Solar System – with average surface temperatures of 737 K (462 °C; 863.6 °F).
This is why those few probes that actually explored the atmosphere and surface in detail – like the Soviet-era Venera probes and landers and NASA’s Pioneer Venus multiprobe – were only able to return data for a matter of hours. All other missions to Venus have either taken the form of orbiters or consisted of spacecraft conducting flybys while en route to other destinations.
Having worked in the fields of space exploration and aerospace engineering for 20 years, Michael Paul is well-versed in the challenges of mounting missions to other planets. During his time with the Johns Hopkins University Applied Physics Laboratory (JHUAPL), he contributed to NASA’s CONTOUR and STEREO missions, and was also instrumental in the launch and early operations of the MESSENGER mission to Mercury.
However, it was a flagship-level study in 2008 – performed collaboratively between JHUAPL and NASA’s Jet Propulsion Laboratory (JPL) – that opened his eyes to the need for missions that take advantage of the process known as In-Situ Resource Utilization (ISRU). As he stated during the seminar:
“That year we actually studied a very large mission to Europa which evolved into the current Europa Clipper mission. And we also studied a flagship mission to the Saturn, to Titan specifically. The Titan-Saturn system mission study was a real eye-opener for me in terms what could be done and why we should be doing a lot of more adventurous and more aggressive exploration of in-situ in certain places.”
The flagship mission to Titan was the subject of Paul’s work after joining Penn State’s Applied Research Laboratory in 2009. During his time there, he became a NASA Innovative Advanced Concepts (NIAC) Program Fellow for his co-creation of the Titan Submarine. For this proposed mission, which would explore the methane lakes of Titan, Paul helped to develop underwater power systems that could provide energy for planetary landers that can’t see the Sun.
Having returned to JHUAPL, where he is now the Space Mission Formulation Lead, Paul continues to work on in-situ concepts that could enable missions to locations in the Solar System that present a challenge. In-situ exploration, where local resources are relied upon for various purposes, presents numerous advantages over more traditional concepts, not the least of which is cost-effectiveness.
Consider missions that rely on Multi-Mission Radioisotope Thermoelectric Generators (MMRTGs) – where radioactive elements like Plutonium-238 are used to generate electricity. Whereas this type of power system – which was used by the Viking 1 and 2 landers (sent to Mars in 1975) and the more recent Curiosity rover – provides unparalleled energy density, the cost of such missions is prohibitive.
What’s more, in-situ missions could also function in places where conventional solar cells would not work. These include not only locations in the outer Solar System (i.e. Europa, Titan and Enceladus) but also places closer to home. The South Pole-Aitken Basin, for example, contains permanently shadowed regions on the Moon that NASA and other space agencies are interested in exploring (and maybe colonizing) due to the abundance of water ice there.
But there’s also the surface of Venus, where sunlight is in short supply because of the planet’s dense atmosphere. As Paul explained in the course of the seminar:
“What can you do with other power systems in places where the Sun just doesn’t shine? Okay, so you want to get to the surface of Venus and last more than a couple of hours. And I think that in the last 10 or 15 years, all the missions that [were proposed] to the surface of Venus pretty much had a two-hour timeline. And those were all proposed, none of those missions were actually flown. And that’s in line with the 2 hours that the Russian landers survived when they got there, to the surface of Venus.”
The solution to this problem, as Paul sees it, is to employ a Stored-Chemical Energy and Power System (SCEPS), also known as a Stirling engine. This proven technology relies on stored chemical energy to generate electricity, and is typically used in underwater systems. But repurposed for Venus, it could give a lander mission a considerable amount of time (compared to previous Venus missions) with which to conduct surface studies.
For the power system Paul and his colleagues are envisioning, the Stirling engine would take solid lithium metal (or possibly solid iodine) and liquefy it with a pyrotechnic charge. The resulting liquid would then be fed into another chamber, where it would be combined with an oxidant. This would produce heat through combustion, which would in turn be used to boil water, spin turbines, and generate electricity.
Such a system is typically closed and produces no exhaust, which makes it very useful for underwater systems that cannot compromise their buoyancy. On Venus, such a system would allow for electrical production without short-lived batteries or an expensive nuclear fuel source, and could function in a low solar-energy environment.
An added benefit for such a craft operating on Venus is that the oxidizer would be provided locally, thus removing the need for a heavy component. By simply letting in outside CO2 – which Venus’ atmosphere has in abundance – and combining it with the system’s liquefied lithium (or iodine), the SCEPS system could provide sustained energy for a period of days.
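Some rough arithmetic shows how fuel mass translates into surface time. The sketch below is a back-of-envelope estimate only – the fuel load, usable specific energy, conversion efficiency, and power draw are all assumed placeholder values, not numbers from Paul’s team:

```python
# Back-of-envelope runtime estimate for a lithium SCEPS lander.
# Every number below is an illustrative assumption, not a value from
# the ALIVE study: the real trade depends on the Li + CO2 reaction
# chemistry, turbine efficiency, and the lander's power budget.

LITHIUM_MASS_KG = 50.0            # assumed fuel load
SPECIFIC_ENERGY_J_PER_KG = 20e6   # assumed usable heat per kg of lithium
CONVERSION_EFFICIENCY = 0.20      # assumed heat-to-electricity efficiency
LANDER_POWER_W = 250.0            # assumed average electrical draw

energy_j = LITHIUM_MASS_KG * SPECIFIC_ENERGY_J_PER_KG * CONVERSION_EFFICIENCY
runtime_days = energy_j / LANDER_POWER_W / 86_400
print(f"~{runtime_days:.1f} days of operation")  # -> ~9.3 days
```

With assumptions in this ballpark, the runtime lands comfortably inside the 5-to-10-day window the team describes below.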
Further help came from the Glenn Research Center’s COMPASS lab, where engineers from multiple disciplines perform integrated vehicle systems analyses. From all of this, a mission concept known as the Advanced Lithium Venus Explorer (ALIVE) was developed. With the help of Steven Oleson – the head of GRC’s COMPASS lab – Paul and his team envision a mission where a lander would reach the surface of Venus and study it for 5 to 10 days.
All told, that’s an operational window of between 120 and 240 hours – in other words, 60 to 120 times as long as previous missions. However, how much such a mission would cost remains to be seen. According to Paul, that question became the basis of an ongoing debate between himself and Oleson, who disagreed as to whether it would be part of the Discovery Program or the New Frontiers Program.
As Paul explained, missions belonging to the former were recently capped at the $450 to $500 million level while the latter are capped at $850 million. “I believe that if you did this right, you could get it into a Discovery mission,” he said. “Here at APL, I’ve seen really complicated ideas fit inside a Discovery cost cap. And I believe that the way we crafted this mission, you could do this for a Discovery mission. And it would be really exciting to get that done.”
From a purely technological standpoint, this is not a new idea. But in terms of space exploration, it has never been done before. Granted, there are still many tests which would need to be conducted before such a mission to Venus can be planned. In particular, there are the byproducts created by combusting lithium and CO2 under Venus-like conditions, which have already produced some unexpected results during tests.
In addition, there is the problem of nitrogen gas (N2) – also present in Venus’ atmosphere – building up in the system, which would need to be vented in order to prevent a blowout. But the advantages of such a system are evident, and Paul and his colleagues are eager to take additional steps to develop it. This summer, they will be doing another test of a lithium SCEPS under the watchful eye of NIAC.
By this time next year, they hope to have completed their analysis and their design for the system, and begin building one which they hope to test in a controlled temperature environment. This will be the first step in what Paul hopes will be a three-year period of testing and development.
“The first year we’re basically going to do a lot of number crunching to make sure we got it right,” he said. “The second year we’re going to build it, and test it at higher temperatures than room temperature – but not the high temperatures of Venus! And in the third year, we’re going to do the high temperature test.”
Ultimately, the concept could be made to function in any number of high and low temperature conditions, allowing for cost-effective long-duration missions in all kinds of extreme environments. These would include Titan, Europa and Enceladus, but also Venus, the Moon, and perhaps the permanently-shadowed regions on Mercury’s poles as well.
Space exploration is always a challenge. Whenever ideas come along that make it possible to peek into more environments, and on a budget to boot, it is time to start researching and developing them!
To some, art and science are opposed to one another. Art is aesthetics, expression, and intuition, while science is all cold, hard, rational thought. But that’s a simplistic understanding. They’re both quintessential human endeavours, and they’re both part of the human spirit.
Some at NASA have always understood this, and there’s actually an interesting, collaborative history between NASA and the art world that reaches back several decades. Not the kind of art that you see hanging in elite galleries in the world’s large cities, but the kind of art that documents achievements in space exploration, and that helps us envision what our future could be.
Back in 1962, when NASA was 4 years old, NASA administrator James Webb put the wheels in motion for a collaboration between NASA and American artists. Artist Bruce Stevenson had been commissioned to produce a portrait of Alan Shepard. Shepard, of course, was the first American in space. He piloted the first Project Mercury flight, MR-3, in 1961. When Webb saw it, he got a bright idea.
When Stevenson brought his portrait of Shepard to NASA headquarters, James Webb thought that Stevenson wanted to paint portraits of all seven Mercury astronauts. But Webb thought a group portrait would be even better. The group portrait was never produced, but it got Webb thinking. In a memo, he said “…we should consider in a deliberate way just what NASA should do in the field of fine arts to commemorate the …historic events” of the American space program.
That set in motion a framework that exists to this day. Beyond just portraits, Webb wanted artists to produce paintings that would convey the excitement around the entire endeavour of space flight, and what the deeper meaning behind it might be. He wanted artists to capture all of the excitement around the preparation and countdown for launches, and activities in space.
That’s when the NASA collaboration with artists began. A young artist named James Dean was assigned to the program, and he took a page from the Air Force, which had established its own art program in 1954.
There’s a whole cast of characters involved, each one contributing to the success of the program. One such person was John Walker, Director of the National Gallery. He was enthusiastic, saying in a talk in 1965 that “the present space exploration effort by the United States will probably rank among the more important events in the history of mankind.” History has certainly proven those words to be true.
Walker went on to say that it was the artists’ job “…not only to record the physical appearance of the strange new world which space technology is creating, but to edit, select and probe for the inner meaning and emotional impact of events which may change the destiny of our race.”
And that’s what they did. Artists like Norman Rockwell, Andy Warhol, Peter Hurd, Annie Leibovitz, Robert Rauschenberg, and others all took part in the program.
In the 1970s, thinkers like Gerard K. O’Neill began to formulate ideas of what human colonies in space might look like. NASA held a series of conferences where these ideas were shared and explored. Artists Rick Guidice and Don Davis created many paintings and illustrations of what colony designs like Bernal Spheres, Double Cylinders, and Toroidal Colonies might look like.
NASA continues to work with artists, though the nature of the relationship has changed over the decades. Artists are often used to flesh out new discoveries when images are not available. Cassini’s so-called Grand Finale, when it will orbit between Saturn and its rings 22 times before crashing into the planet, was visualized by an unnamed artist.
The recent discovery of the exoplanets in the TRAPPIST-1 system was huge news. It still is. But TRAPPIST-1 is over 40 light years away, and NASA relied on artists to bring the discovery to life. This illustration was widely used to help us understand what planets orbiting the TRAPPIST-1 Red Dwarf might look like.
NASA now has quite a history of relying on art to convey what words can’t do. Space colonies, distant solar systems, and spacecraft ending their missions on other worlds have all relied on the work of artists. But if I had to choose a favorite, it would probably be the 1981 watercolor by artist Henry Casselli. It makes you wonder what it’s like for an individual to take part in these species-defining endeavours. Just one person, sitting, contemplating, and preparing.
Ever since the discovery of the Higgs boson in 2012, the Large Hadron Collider has been dedicated to searching for physics that goes beyond the Standard Model. Well-suited to this task is the Large Hadron Collider beauty experiment (LHCb), established in 1995 specifically to explore what happened after the Big Bang that allowed matter to survive and create the Universe as we know it.
Since that time, the LHCb has been doing some rather amazing things. This includes discovering five new particles, uncovering evidence of a new manifestation of matter-antimatter asymmetry, and (most recently) obtaining unusual results when monitoring the decay of B mesons. These findings, which CERN announced in a recent press release, could be an indication of new physics that are not part of the Standard Model.
In this latest study, the LHCb collaboration team noted how the decay of B0 mesons resulted in the production of an excited kaon and a pair of electrons or muons. Muons, for the record, are subatomic particles that are 200 times more massive than electrons, but whose interactions are believed to be the same as those of electrons (as far as the Standard Model is concerned).
This is a consequence of what is known as “lepton universality”, which predicts not only that electrons and muons behave the same way, but that they should be produced with the same probability – with some corrections arising from their difference in mass. However, in testing the decay of B0 mesons, the team found that the decay process produced muons less frequently than expected. These results were collected during Run 1 of the LHC, which ran from 2009 to 2013.
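A common way to phrase such a test is as a ratio of branching fractions, which lepton universality pins near unity (the notation here is generic – the exact observable and kinematic binning in the LHCb analysis may differ):

$$R_{K^{*0}} \;=\; \frac{\mathcal{B}(B^0 \to K^{*0}\,\mu^+\mu^-)}{\mathcal{B}(B^0 \to K^{*0}\,e^+e^-)} \;\approx\; 1 \quad \text{(Standard Model expectation)}$$

A deficit of muons, as the team reported, shows up as a measured value below 1.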
The results of these decay tests were presented on Tuesday, April 18th, at a CERN seminar, where members of the LHCb collaboration team shared their latest findings. As they indicated during the course of the seminar, these findings are significant in that they appear to confirm results obtained by the LHCb team during previous decay studies.
This is certainly exciting news, as it hints at the possibility that new physics are being observed. With the confirmation of the Standard Model (made possible with the discovery of the Higgs boson in 2012), investigating theories that go beyond this (i.e. Supersymmetry) has been a major goal of the LHC. And with its upgrades completed in 2015, it has been one of the chief aims of Run 2 (which will last until 2018).
Naturally, the LHCb team indicated that further studies will be needed before any conclusions can be drawn. For one, the discrepancy they noted between the production of muons and electrons has a statistical significance of only 2.2 to 2.5 sigma. To put that in perspective, the first detection of the Higgs boson was only announced once it reached a significance of 5 sigma.
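For readers who want to translate sigmas into probabilities: by the usual one-sided Gaussian convention in particle physics, a significance maps to a tail probability. A minimal snippet using the standard scipy library:

```python
# Convert a significance in sigma to a one-sided Gaussian p-value,
# the convention typically used when particle physicists quote "sigma".
from scipy.stats import norm

for sigma in (2.2, 2.5, 5.0):
    p = norm.sf(sigma)  # survival function: P(Z > sigma)
    print(f"{sigma} sigma -> p = {p:.2e}")

# 2.2 sigma -> p = 1.39e-02   (roughly a 1-in-70 fluctuation)
# 2.5 sigma -> p = 6.21e-03
# 5.0 sigma -> p = 2.87e-07   (the discovery threshold)
```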
In addition, these results are inconsistent with previous measurements which indicated that there is indeed symmetry between electrons and muons. As a result, more decay tests will have to be conducted and more data collected before the LHCb collaboration team can say definitively whether this was a sign of new particles, or merely a statistical fluctuation in their data.
The results of this study will soon be released in an LHCb research paper. And for more information, check out the PDF version of the seminar.
Humankind has a long history of looking up at the stars and seeing figures and faces. In fact, there’s a word for recognizing faces in natural objects: pareidolia. But this must be the first time someone has recognized Bart Simpson’s face on an object in space.
Researchers studying landslides on the dwarf planet Ceres noticed a pattern that resembles the cartoon character. The researchers, from the Georgia Institute of Technology, are studying massive landslides that occur on the surface of the icy dwarf. Their findings are reinforcing the idea that Ceres has significant quantities of frozen water.
In a new paper in the journal Nature Geoscience, the team of scientists, led by Georgia Tech Assistant Professor and Dawn Science Team Associate Britney Schmidt, examined the surface of Ceres looking for morphologies that resemble landslides here on Earth.
Research shows us that Ceres probably has a subsurface shell that is rich with water-ice. That shell is covered by a layer of silicates. Close examination of the type, and distribution, of landslides at different latitudes adds more evidence to the sub-surface ice theory.
Ceres is pretty big. At 945 km in diameter, it’s the largest object in the asteroid belt between Mars and Jupiter. It’s big enough to be rounded by its own gravity, and it actually comprises about one third of the mass of the entire asteroid belt.
The team used observations from the Dawn Framing Camera to identify three types of landslides on Ceres’ surface:
Type 1 are large, rounded features similar to glacier features in the Earth’s Arctic region. These are found mostly at high latitudes on Ceres, which is where most of the ice probably is.
Type 2 are the most common. They are thinner and longer than Type 1, and look like terrestrial avalanche deposits. They’re found mostly at mid-latitudes on Ceres. The researchers behind the study thought one of them looked like Bart Simpson’s face.
Type 3 occur mostly at low latitudes near Ceres’ equator. These are always found coming from large impact craters, and probably formed when impacts melted the sub-surface ice.
The authors of the study say that finding larger landslides further away from the equator is significant, because that’s where most of the ice is.
“Landslides cover more area in the poles than at the equator, but most surface processes generally don’t care about latitude,” said Schmidt, a faculty member in the School of Earth and Atmospheric Sciences. “That’s one reason why we think it’s ice affecting the flow processes. There’s no other good way to explain why the poles have huge, thick landslides; mid-latitudes have a mixture of sheeted and thick landslides; and low latitudes have just a few.”
Key to understanding these results is the fact that these types of processes have only been observed before on Earth and Mars. Earth, obviously, has water and ice in great abundance, and Mars has large quantities of sub-surface ice as well. “It’s just kind of fun that we see features on this small planet that remind us of those on the big planets, like Earth and Mars,” Schmidt said. “It seems more and more that Ceres is our innermost icy world.”
“These landslides offer us the opportunity to understand what’s happening in the upper few kilometers of Ceres,” said Georgia Tech Ph.D. student Heather Chilton, a co-author on the paper. “That’s a sweet spot between information about the upper meter or so provided by the GRaND (Gamma Ray and Neutron Detector) and VIR (Visible and Infrared Spectrometer) instrument data, and the tens of kilometers-deep structure elucidated by crater studies.”
It’s not just the presence of these landslides, but their frequency, that upholds the icy-mantle idea on Ceres. The study showed that 20% to 30% of craters on Ceres larger than 10 km have some type of landslide. The researchers say that Ceres’ upper layers could be up to 50% ice by volume.
Forty years ago, Canadian physicist Bill Unruh made a surprising prediction regarding quantum field theory. Known as the Unruh effect, his theory predicted that an accelerating observer would be bathed in blackbody radiation, whereas an inertial observer would be exposed to none. What better way to mark the 40th anniversary of this theory than to consider how it could affect human beings attempting relativistic space travel?
Such was the intent behind a new study by a team of researchers from Sao Paulo, Brazil. In essence, they consider how the Unruh effect could be confirmed using a simple experiment that relies on existing technology. Not only would this experiment prove once and for all if the Unruh effect is real, it could also help us plan for the day when interstellar travel becomes a reality.
To put it in layman’s terms, Einstein’s Theory of Relativity states that time and space are dependent upon the inertial reference frame of the observer. Consistent with this is the prediction that if an observer is traveling at a constant speed through empty vacuum, they will find that the temperature of said vacuum is absolute zero. But if they were to begin to accelerate, the empty space would appear to grow warmer.
This is what William Unruh – a theorist from the University of British Columbia (UBC), Vancouver – asserted in 1976. According to his theory, an observer accelerating through space would be subject to a “thermal bath” – i.e. photons and other particles – which would intensify the more they accelerated. Unfortunately, no one has ever been able to measure this effect, since no spacecraft exists that can achieve the kind of accelerations necessary.
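The prediction itself is quantitative: the Unruh temperature grows linearly with proper acceleration, T = ħa/(2πck_B). A quick calculation with standard physical constants shows why no spacecraft could test it directly:

```python
# Unruh temperature T = hbar * a / (2 * pi * c * k_B): the blackbody
# temperature that an observer with proper acceleration a should see.
from scipy.constants import hbar, c, k, g, pi

def unruh_temperature(acceleration_m_s2):
    return hbar * acceleration_m_s2 / (2 * pi * c * k)

print(f"1 g of acceleration: {unruh_temperature(g):.1e} K")     # ~4.0e-20 K
print(f"a = 1e20 m/s^2:      {unruh_temperature(1e20):.2f} K")  # ~0.41 K
```

At one Earth gravity, the predicted thermal bath sits some twenty orders of magnitude below a kelvin; measurable temperatures require accelerations on the order of 10^20 m/s².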
For the sake of their study – which was recently published in the journal Physical Review Letters under the title “Virtual observation of the Unruh effect” – the research team proposed a simple experiment to test for the Unruh effect. Led by Gabriel Cozzella of the Institute of Theoretical Physics (IFT) at Sao Paulo State University, they claim that this experiment would settle the issue by measuring an already-understood electromagnetic phenomenon.
Essentially, they argue that it would be possible to detect the Unruh effect by measuring what is known as Larmor radiation. This refers to the electromagnetic energy that is radiated away from charged particles (such as electrons, protons or ions) when they accelerate. As they state in their study:
“A more promising strategy consists of seeking for fingerprints of the Unruh effect in the radiation emitted by accelerated charges. Accelerated charges should back react due to radiation emission, quivering accordingly. Such a quivering would be naturally interpreted by Rindler observers as a consequence of the charge interaction with the photons of the Unruh thermal bath.”
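For reference, the Larmor radiation they invoke is classical electrodynamics: a charge q undergoing acceleration a radiates power

$$P \;=\; \frac{q^{2}a^{2}}{6\pi\epsilon_{0}c^{3}}$$

(in SI units), so any extra quivering imposed by the Unruh thermal bath should leave an imprint on the emitted radiation.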
As they describe in their paper, this would consist of monitoring the light emitted by electrons within two separate reference frames. In the first, known as the “accelerating frame”, electrons are fired laterally across a magnetic field, which would cause the electrons to move in a circular pattern. In the second, the “laboratory frame”, a vertical field is applied to accelerate the electrons upwards, causing them to follow a corkscrew-like path.
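A minimal numerical sketch of those two geometries is below; the field strengths, initial speed, and time step are illustrative choices, not the experiment’s actual parameters:

```python
# Integrate the Lorentz force F = q(E + v x B) for an electron with a
# semi-implicit Euler scheme. All field strengths, the initial speed,
# and the time step are illustrative values, not experimental ones.
import numpy as np

Q_OVER_M = -1.75882e11              # electron charge-to-mass ratio (C/kg)
B = np.array([0.0, 0.0, 1e-4])      # uniform magnetic field along z (T)

def trajectory(E, v0, steps=10000, dt=1e-9):
    r = np.zeros(3)
    v = np.array(v0, dtype=float)
    path = []
    for _ in range(steps):
        a = Q_OVER_M * (E + np.cross(v, B))  # Lorentz acceleration
        v = v + a * dt
        r = r + v * dt
        path.append(r.copy())
    return np.array(path)

# Magnetic field alone: the electron circles in the x-y plane.
circle = trajectory(E=np.zeros(3), v0=[1e5, 0.0, 0.0])
# Add an electric field along z: the circle stretches into a corkscrew
# (the electron runs antiparallel to E because of its negative charge).
corkscrew = trajectory(E=np.array([0.0, 0.0, 5.0]), v0=[1e5, 0.0, 0.0])

print("circle final z:    %.3e m" % circle[-1, 2])     # stays ~0
print("corkscrew final z: %.3e m" % corkscrew[-1, 2])  # tens of meters
```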
In the accelerating frame, Cozzella and his colleagues assume that the electrons would encounter the “fog of photons”, both absorbing and emitting them. In the laboratory frame, the electrons would heat up once vertical acceleration was applied, causing them to show an excess of long-wavelength photons. However, this would be dependent on the “fog” existing in the accelerated frame to begin with.
In short, this experiment offers a simple test which could determine whether or not the Unruh effect exists, which is something that has been in dispute ever since it was proposed. One of the beauties of the proposed experiment is that it could be conducted using particle accelerators and electromagnets that are currently available.
On the other side of the debate are those who claim that the Unruh effect is due to a mathematical error made by Unruh and his colleagues. For those individuals, this experiment is useful because it would effectively debunk this theory. Regardless, Cozzella and his team are confident their proposed experiment will yield positive results.
“We have proposed a simple experiment where the presence of the Unruh thermal bath is codified in the Larmor radiation emitted from an accelerated charge,” they state. “Then, we carried out a straightforward classical-electrodynamics calculation (checked by a quantum-field-theory one) to confirm it by ourselves. Unless one challenges classical electrodynamics, our results must be virtually considered as an observation of the Unruh effect.”
If the experiments should prove successful, and the Unruh effect is proven to exist, it would certainly have consequences for any future deep-space missions that rely on advanced propulsion systems. Between Project Starshot, and any proposed mission that would involve sending a crew to another star system, the added effects of a “fog of photons” and a “thermal bath” will need to be factored in.
When the Apollo astronauts returned to Earth, they came bearing 380.96 kilograms (839.87 lb) of Moon rocks. From the study of these samples, scientists learned a great deal about the Moon’s composition, as well as its history of formation and evolution. For example, the fact that some of these rocks were magnetized revealed that roughly 3 billion years ago, the Moon had a magnetic field.
Much like Earth, this field would have been the result of a dynamo effect in the Moon’s core. But until recently, scientists have been unable to explain how the Moon could maintain such a dynamo effect for so long. Thanks to a new study by a team of scientists from the Astromaterials Research and Exploration Science (ARES) Division at NASA’s Johnson Space Center, we might finally have an answer.
To recap, the Earth’s magnetic field is an integral part of what keeps our planet habitable. Believed to be the result of convection in the planet’s rotating liquid outer core, this field protects the surface from much of the Sun’s radiation. It also ensures that our atmosphere is not slowly stripped away by the solar wind, which is what happened with Mars.
For the sake of their study, which was recently published in the journal Earth and Planetary Science Letters, the ARES team sought to determine how a molten, churning core could generate a magnetic field on the Moon. While scientists have understood how the Moon’s core could have powered such a field in the past, they have been unclear as to how it could have been maintained for such a long time.
Towards this end, the ARES team considered multiple lines of geochemical and geophysical evidence to put constraints on the core’s composition. As Kevin Righter, the lead of the JSC’s high pressure experimental petrology lab and the lead author of the study, explained in a NASA press release:
“Our work ties together physical and chemical constraints and helps us understand how the moon acquired and maintained its magnetic field – a difficult problem to tackle for any inner solar system body. We created several synthetic core compositions based on the latest geochemical data from the moon, and equilibrated them at the pressures and temperatures of the lunar interior.”
Specifically, the ARES scientists conducted simulations of how the core would have evolved over time, based on varying levels of nickel, sulfur and carbon content. This consisted of preparing powders of iron, nickel, sulfur and carbon and mixing them in the proper proportions – based on recent analyses of Apollo rock samples.
Once these mixtures were prepared, they subjected them to heat and pressure conditions consistent with what exists at the Moon’s core. They also varied these temperatures and pressures based on the possibility that the Moon underwent changes in temperature during its early and later history – i.e. hotter during its early history and cooler later on.
What they found was that a lunar core composed of iron/nickel that had a small amount of sulfur and carbon – specifically 0.5% sulfur and 0.375% carbon by weight – fit the bill. Such a core would have a high melting point and would have likely started crystallizing early in the Moon’s history, thus providing the necessary heat to drive the dynamo and power a lunar magnetic field.
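The mixing arithmetic behind such synthetic samples is straightforward. As an illustration only – the sulfur and carbon fractions come from the study, while the nickel fraction below is a placeholder, since the team tested a range of compositions:

```python
# Compute powder masses for a synthetic lunar-core sample using the
# study's best-fit S and C weight fractions. The Fe/Ni split below is
# a placeholder assumption; the ARES team varied Ni content as well.

SAMPLE_MASS_G = 10.0
fractions = {
    "sulfur": 0.005,    # 0.5 % by weight (from the study)
    "carbon": 0.00375,  # 0.375 % by weight (from the study)
    "nickel": 0.10,     # assumed placeholder value
}
fractions["iron"] = 1.0 - sum(fractions.values())  # remainder

for element, f in fractions.items():
    print(f"{element:>7}: {SAMPLE_MASS_G * f:.4f} g")
```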
This field would have eventually died out after heat flow led the core to cool, thus arresting the dynamo effect. Not only do these results provide an explanation for all the paleomagnetic and seismic data we currently have on the Moon, they are also consistent with everything we know about the Moon’s geochemical and geophysical makeup.
Prior to this, core models tended to place the Moon’s sulfur content much higher. This would mean a much lower melting point, implying that crystallization could not have begun until much more recently in the Moon’s history. Other theories have been proposed as well, ranging from shear forces to impacts providing the necessary heat to power a dynamo.
However, the ARES team’s study provides a much simpler explanation, and one which happens to fit with all that we know about the Moon. Naturally, additional studies will be needed before there is any certainty on the issue. No doubt, this will first require that human beings establish a permanent outpost on the Moon to conduct research.
But it appears that for the time being, one of the deeper mysteries of the Earth-Moon system might be resolved at last.
NASA strives to explore space and to expand our understanding of our Solar System and beyond. But they also turn their keen eyes on Earth in an effort to understand how our planet is doing. Now, they’re releasing a new composite image of Earth at night, the first one since 2012.
We’ve grown accustomed to seeing these types of images in our social media feeds, especially night-time views of Earth from the International Space Station. But this new image is much more than that. It’s part of a whole project that will allow scientists—and the rest of us—to study Earth at night in unprecedented detail.
Night-time views of Earth have been around for 25 years or so, usually produced several years apart. Comparing those images shows clearly how humans are changing the face of the planet. Scientists have been refining the imaging over the years, producing better and more detailed images.
The team behind this is led by Miguel Román of NASA’s Goddard Space Flight Center. They’ve been analyzing data and working on new software and algorithms to improve the quality, clarity, and availability of the images.
This new work stems from a collaboration between the National Oceanic and Atmospheric Administration (NOAA) and NASA. In 2011, the two agencies launched the Suomi National Polar-orbiting Partnership (Suomi NPP) satellite. The key instrument on that satellite is the Visible Infrared Imaging Radiometer Suite (VIIRS), a 275 kg piece of equipment that is a big step forward in Earth observation.
VIIRS detects photons of light in 22 different wavelengths. It’s the first satellite instrument to make quantitative measurements of light emissions and reflections, which allows researchers to distinguish the intensity, types and the sources of night lights over several years.
Producing these types of maps is challenging. The raw data from Suomi NPP and its VIIRS instrument has to be skillfully manipulated to get these images. The main challenge is the Moon itself.
As the Moon goes through its different phases, the amount of light hitting Earth is constantly changing. Those changes are predictable, but they still have to be accounted for. Other factors have to be managed as well, like seasonal vegetation, clouds, aerosols, and snow and ice cover. Other changes in the atmosphere, though faint, also affect the outcome. Phenomena like auroras change the way that light is observed in different parts of the world.
The newly released maps were made from data throughout the year, and the team developed algorithms and code that picked the clearest night views each month, ultimately combining moonlight-free and moonlight-corrected data.
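Conceptually, that compositing reduces to a per-pixel selection over a month of imagery: discard samples contaminated by cloud or moonlight, then keep a robust statistic of the rest. Here is a simplified sketch of the idea – not NASA’s actual VIIRS processing chain:

```python
# Simplified per-pixel monthly composite: mask out cloudy or moonlit
# samples, then take the median of what survives. This is a conceptual
# sketch of the approach, not NASA's actual VIIRS processing chain.
import numpy as np

def monthly_composite(radiance, cloudy, moonlit):
    """radiance: (nights, H, W) array; cloudy/moonlit: boolean masks."""
    samples = np.where(cloudy | moonlit, np.nan, radiance)
    return np.nanmedian(samples, axis=0)  # per-pixel clear-sky median

# Tiny synthetic example: 30 nights of a 2x2 scene.
rng = np.random.default_rng(0)
radiance = rng.uniform(0, 100, size=(30, 2, 2))
cloudy = rng.random((30, 2, 2)) < 0.3
moonlit = rng.random((30, 2, 2)) < 0.2
print(monthly_composite(radiance, cloudy, moonlit))
```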
The Suomi NPP satellite is in a polar orbit, and it observes the planet in vertical swaths that are about 3,000 km wide. With its VIIRS instrument, it images almost every location on the surface of the Earth every day. The VIIRS low-light sensor has six times better spatial resolution for distinguishing night lights, and 250 times better overall resolution, than previous satellites.
What do all those numbers mean? The team hopes that their new techniques, combined with the power of VIIRS, will create images with extraordinary resolution: the ability to distinguish a single highway lamp, or fishing boat, anywhere on the surface of Earth.
Beyond thought-provoking eye-candy for the rest of us, these images of night-time Earth have practical benefits to researchers and planners.
“Thanks to VIIRS, we can now monitor short-term changes caused by disturbances in power delivery, such as conflict, storms, earthquakes and brownouts,” said Román. “We can monitor cyclical changes driven by recurring human activities such as holiday lighting and seasonal migrations. We can also monitor gradual changes driven by urbanization, out-migration, economic changes, and electrification. The fact that we can track all these different aspects at the heart of what defines a city is simply mind-boggling.”
These maps of night-time Earth are a powerful tool. But the newest development will be a game-changer: Román and his team aim to provide daily, high-definition views of Earth at night. Daily updates will allow real-time tracking of changes on Earth’s surface in a way never before possible.
Maybe the best thing about these upcoming daily night-time light maps is that they will be publicly available. The Suomi NPP satellite is not a military satellite, and its data is not classified in any way. The team hopes to have these daily images available later this year. Once the new daily light-maps of Earth are available, they’ll be another powerful tool in the hands of researchers, planners, and the rest of us.
These maps will join other endeavours like NASA-EOSDIS Worldview. Worldview is a fascinating, easy-to-use data tool that anyone can access. It allows users to look at satellite images of the Earth with user-selected layers for things like dust, smoke, drought, fires, and storms. It’s a powerful tool that can change how you understand the world.