For a supposedly dead world, Mars sure provides a lot of eye candy. The High Resolution Imaging Science Experiment (HiRISE) aboard NASA’s Mars Reconnaissance Orbiter (MRO) is our candy store for stunning images of Mars. Recently, HiRISE gave us this image (above) of colorful, layered bedrock on the surface of Mars. Notice the dunes in the center. The colors are enhanced, which makes the images more useful scientifically, but the view is still amazing.
HiRISE has done it before, of course. Its keen vision has fed us a steady stream of downright jaw-dropping images of Elon Musk’s favorite planet. Check out this image of Gale Crater, taken by HiRISE in March 2016 to celebrate its 10th anniversary orbiting Mars.
The MRO is approaching its 11th anniversary at Mars. It has completed over 45,000 orbits and taken more than 216,000 images. The next image shows a fresh impact crater that formed sometime between July 2010 and May 2012. The impact was in a dusty area, and in this color-enhanced image the fresh crater looks blue because the impact blasted away the red dust.
These landforms on the surface of Mars are still a bit of a mystery. It’s possible that they formed in the presence of an ancient Martian ocean, or perhaps glaciers. Whatever the case, they are mesmerizing to look at.
Many images of the Martian surface have confounded scientists, and some of them still do. But some, though they look puzzling and difficult to explain, have more prosaic explanations. The image below is a large area of intersecting sand dunes.
The surface of Mars is peppered with craters, and HiRISE has imaged many of them. This double crater was caused by a meteorite that split in two before hitting the surface.
The image below shows gullies and dunes in Russell Crater. The field of dunes in this image is about 30 km long. The image was taken during the southern winter, when carbon dioxide is frozen; you can see the frozen CO2 as white on the shaded side of the ridges. Scientists think the gullies form when the CO2 sublimates in the summer.
The next image is also of Russell Crater. It’s an area of ongoing study for the HiRISE team, which means more Russell eye candy for us. This image shows the dunes, CO2 frost, and dust devil tracks that punctuate the area.
One of the main geological features on Mars is Valles Marineris, the massive canyon system that dwarfs the Grand Canyon here on Earth. HiRISE captured this image of delicate dune features inside Valles Marineris.
The Mars Reconnaissance Orbiter is still going strong. In fact, it continues to act as a communications relay for surface rovers. The HiRISE camera is along for the ride, and if the past is any indication, it will continue to provide astounding images of Mars.
It might sound trite to say that the Universe is full of mysteries. But it’s true.
Chief among them are things like Dark Matter, Dark Energy, and of course, our old friends the Black Holes. Black Holes may be the most interesting of them all, and the effort to understand them—and observe them—is ongoing.
That effort will be ramped up in April, when the Event Horizon Telescope (EHT) attempts to capture our first image of a Black Hole and its event horizon. The target of the EHT is none other than Sagittarius A*, the monster black hole at the center of our Milky Way Galaxy. Though the EHT will spend 10 days gathering the data, the actual image won’t be finished processing and available until 2018.
The EHT is not a single telescope, but a number of radio telescopes around the world all linked together. The EHT includes super-stars of the astronomy world like the Atacama Large Millimeter Array (ALMA) as well as lesser-known ‘scopes like the South Pole Telescope (SPT). Advances in very-long-baseline interferometry (VLBI) have made it possible to connect all these telescopes together so that they act like one big ‘scope the size of Earth.
The combined power of all these telescopes is essential because even though the EHT’s target, Sagittarius A*, has over 4 million times the mass of our Sun, it’s 26,000 light-years away from Earth. It’s also only about 20 million km across. Huge, but tiny.
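To get a feel for just how tiny, here’s a rough back-of-envelope calculation (ours, not the EHT team’s). It compares the apparent size of Sagittarius A* with the resolving power of an Earth-sized dish at the EHT’s 1.3 mm observing wavelength:

```python
# Rough estimate: apparent size of Sgr A*'s event horizon vs. the
# diffraction-limited resolution of an Earth-sized telescope.
LY_KM = 9.461e12            # kilometres per light-year
RAD_TO_UAS = 2.063e11       # radians to microarcseconds

size_km = 20e6              # horizon diameter quoted above, ~20 million km
dist_km = 26_000 * LY_KM    # distance to the galactic centre

apparent = size_km / dist_km * RAD_TO_UAS
print(f"apparent size of Sgr A*: ~{apparent:.0f} microarcseconds")   # ~17

wavelength_m = 1.3e-3       # the EHT observes at about 1.3 mm
baseline_m = 1.274e7        # longest baseline, roughly Earth's diameter
resolution = wavelength_m / baseline_m * RAD_TO_UAS
print(f"EHT resolving power:     ~{resolution:.0f} microarcseconds") # ~21
```

The two numbers land in the same ballpark, which is exactly why nothing smaller than a planet-sized virtual telescope is up to the job.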
The EHT is impressive for a number of reasons. In order to function, each of the component telescopes is calibrated with an atomic clock. These clocks keep time to an accuracy of about a trillionth of a second per second. The effort requires an army of hard drives, all of which will be transported via jetliner to the Haystack Observatory at MIT for processing. That processing requires what’s called a grid computer, a sort of virtual super-computer made up of about 800 CPUs.
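To give a flavor of what that grid computer actually does, here’s a heavily simplified toy sketch of the core VLBI operation: cross-correlating two stations’ recordings of the same cosmic noise to recover the arrival-time delay between them. This illustrates the principle only; it is nothing like the EHT’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
source = rng.standard_normal(10_000)     # the source's random radio noise

true_delay = 250                         # the wavefront reaches station B later
station_a = source + 0.5 * rng.standard_normal(10_000)
station_b = np.roll(source, true_delay) + 0.5 * rng.standard_normal(10_000)

# The peak of the cross-correlation recovers the geometric delay between
# the stations, which is the measurement that lets VLBI combine far-flung
# dishes into one Earth-sized aperture.
corr = np.correlate(station_b, station_a, mode="full")
recovered = corr.argmax() - (len(source) - 1)
print(f"recovered delay: {recovered} samples")   # 250
```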
But once the EHT has done its thing, what will we see? What we might see when we finally get this image is based on the work of three big names in physics: Einstein, Schwarzschild, and Hawking.
As gas and dust approach the black hole, they speed up. They don’t just speed up a little, they speed up a lot, and that makes them emit energy, which we can see. That would be the crescent of light in the image above. The black blob would be a shadow cast over the light by the hole itself.
Einstein didn’t exactly predict the existence of Black Holes, but his theory of general relativity did. It was the work of one of his contemporaries, Karl Schwarzschild, that actually nailed down how a black hole might work. Fast forward to the 1970s and the work of Stephen Hawking, who predicted what’s known as Hawking Radiation.
Taken together, the three give us an idea of what we might see when the EHT finally captures and processes its data.
Einstein’s general relativity predicted that a sufficiently massive, compact object would warp space-time so much that not even light could escape it. Schwarzschild’s work was based on Einstein’s equations and revealed that black holes have event horizons: no light emitted from inside the event horizon can reach an outside observer. And Hawking Radiation is the theorized black-body radiation predicted to be released by black holes.
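Schwarzschild’s work also gives us a concrete number to check against the figures quoted earlier. The radius of a black hole’s event horizon depends only on its mass:

```latex
r_s = \frac{2GM}{c^2}
    \approx \frac{2 \times (6.67 \times 10^{-11}) \times (4 \times 10^{6} \times 1.99 \times 10^{30}\ \mathrm{kg})}
                 {(3.00 \times 10^{8}\ \mathrm{m/s})^{2}}
    \approx 1.2 \times 10^{10}\ \mathrm{m}
```

That works out to a diameter of roughly 24 million km, consistent with the “about 20 million km across” quoted earlier. (This is our own plug-in of the standard formula, not a number from the EHT team.)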
The power of the EHT will help us clarify our understanding of black holes enormously. If we see what we think we’ll see, it further confirms Einstein’s Theory of General Relativity, a theory that has been confirmed observationally over and over. If the EHT sees something else, something we didn’t expect at all, then Einstein’s General Relativity got something wrong. Not only that, but it would mean we don’t really understand gravity.
In physics circles they say that it’s never smart to bet against Einstein. He’s been proven right time and time again. To find out if he was right again, we’ll have to wait until 2018.
We tend to lump New Zealand and Australia together. They’re culturally similar, and they occupy the same corner of the globe, relative to North America and Europe, anyway. But according to a new paper published in GSA Today, the journal of the Geological Society of America, New Zealand and its neighbor New Caledonia are actually part of their own continent: ‘Zealandia.’
Continent means something different to geographers than it does to geologists. To be considered a geological continent, like Zealandia, the area in question has to satisfy a few conditions:
the land in question has to be higher than the ocean floor
it has to include a broad range of siliceous igneous, metamorphic, and sedimentary rocks
it has to have thicker crust than the ocean floor that surrounds it
it has to have well-defined limits, and be large enough to be considered a continent
In geology, the first three points are well-understood. But as the authors say in the introduction to their paper, “…the last point—how “major” a piece of continental crust has to be to be called a continent—is almost never discussed….” Since the Earth has so many micro-continents and continental fragments, defining how large something has to be to be called a continent is challenging. But the researchers did their homework.
They noted that the term “Zealandia” has been used before to describe New Zealand and surrounding regions. But the boundaries were never fully explored. 94% of this new continent is submerged, which helps explain why it’s taken this long to be identified.
Zealandia seemed to be a collection of broken pieces, but new data collected over the years has challenged that interpretation. Recent satellite data has given us new gravity and elevation maps of the seafloor. This data shows that Zealandia is a unified region as large as India.
As the authors point out in their paper, it took a while to determine that Zealandia is a continent. There was no Eureka moment. “This is not a sudden discovery but a gradual realization; as recently as 10 years ago we would not have had the accumulated data or confidence in interpretation to write this paper.”
Besides satisfying our intellectual curiosity about our planet, the discovery is important for other reasons. A proper understanding of plate structures and continental boundaries is important to other sciences, and may open up avenues of research we can’t predict yet.
Also, many treaties rely on the agreed upon delineation of maritime and continental boundaries, including rights to fish stocks and underground resources. While the recognition of Zealandia seems clear from a scientific standpoint, it remains to be seen if it will be accepted politically.
Over the past decades, scientists have wrestled with a problem involving the Big Bang Theory: it suggests that there should be three times as much lithium as we can observe. Why is there such a discrepancy between prediction and observation?
To get into that problem, let’s back up a bit.
The Big Bang Theory (BBT) is well-supported by multiple lines of evidence and theory. It’s widely accepted as the explanation for how the Universe started. Three key pieces of evidence support the BBT:
the expansion of the Universe, as described by Hubble’s Law
the cosmic microwave background radiation
rough agreement between calculations and observations of the abundance of primordial light nuclei (Do NOT attempt to say this three times in rapid succession!)
But the BBT still has some niggling questions.
The missing lithium problem centres on the earliest stages of the Universe: from about 10 seconds to 20 minutes after the Big Bang. The Universe was super hot, and it was expanding rapidly. This was the beginning of what’s called the Photon Epoch.
At that time, atomic nuclei formed through nucleosynthesis. But the extreme heat that dominated the Universe prevented the nuclei from combining with electrons to form atoms. The Universe was a plasma of nuclei, electrons, and photons.
Only the lightest nuclei were formed during this time, including most of the helium in the Universe, and small amounts of other light nuclides, like deuterium and our friend lithium. For the most part, heavier elements weren’t formed until stars appeared, and took on the role of nucleosynthesis.
The problem is that our understanding of the Big Bang tells us that there should be three times as much lithium as there is. The BBT gets it right when it comes to other primordial nuclei. Our observations of primordial helium and deuterium match the BBT’s predictions. So far, scientists haven’t been able to resolve this inconsistency.
But a new paper from researchers in China may have solved the puzzle.
One assumption in Big Bang nucleosynthesis is that all of the nuclei are in thermodynamic equilibrium, and that their velocities conform to what’s called the classical Maxwell-Boltzmann distribution. But the Maxwell-Boltzmann distribution describes what happens in an ideal gas. Real gases can behave differently, and this is what the researchers propose: that nuclei in the plasma of the early Photon Epoch behaved slightly differently than thought.
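For the mathematically inclined, here is the shape of the idea. The first expression is the classical Maxwell-Boltzmann velocity distribution; the second is the non-extensive (Tsallis-type) generalization usually invoked in this line of work. The paper’s exact parameterization may differ:

```latex
f_{\mathrm{MB}}(v) \propto v^{2} \exp\!\left(-\frac{m v^{2}}{2 k_B T}\right),
\qquad
f_{q}(v) \propto v^{2} \left[1 - (1-q)\,\frac{m v^{2}}{2 k_B T}\right]^{\frac{1}{1-q}}
```

As the parameter q approaches 1, the second distribution reduces to the first. Even a tiny deviation of q from 1 reshapes the high-velocity tail of the distribution, and it is that tail that drives the nuclear reaction rates which create and destroy nuclei like beryllium-7.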
The authors applied what is known as non-extensive statistics to solve the problem. In the graph above, the dotted lines of the authors’ model predict a lower abundance of the beryllium isotope. This is key, since beryllium decays into lithium. Also key is that the resulting amounts of lithium, and of the other light nuclei, now all conform to the observed abundances. It’s a eureka moment for cosmology aficionados.
What this all means is that scientists can now accurately predict the primordial abundances of the three light nuclei: helium, deuterium, and lithium, without any discrepancy, and without any missing lithium.
This is how science grinds away at problems, and if the authors of the paper are correct, then it further validates the Big Bang Theory, and brings us one step closer to understanding how our Universe was formed.
The final servicing mission to the venerable Hubble Space Telescope (HST) was in 2009. The shuttle Atlantis completed that mission (STS-125), and several components were repaired and replaced, including the installation of improved batteries. The HST is expected to function until 2030–2040. With the retirement of the shuttle program in 2011, it looked like the Hubble mission was destined to play itself out.
But now there’s talk of another servicing mission to the Hubble, to be performed by the Dream Chaser Space System.
The Hubble was originally deployed by the Space Shuttle Discovery in 1990, and it was serviced by shuttle crews on five different missions. Unlike the other telescopes in NASA’s Great Observatories program, the Hubble was designed to be serviced during its lifetime.
Those servicing missions, which took place in 1993, 1997, 1999, 2002, and 2009, were complex missions which required coordination between the Kennedy Space Center, Johnson Space Center, and the Goddard Space Flight Center. Grasping Hubble with the robotic Canadarm and placing it inside the shuttle bay was a methodical process. So was the repair and replacement of components, and the testing of components once Hubble was removed from the cargo bay. Though complicated, these missions were ultimately successful, and the Hubble is still operating.
A future servicing mission to the Hubble would be a sort of insurance policy in case there are problems with NASA’s new flagship telescope, the James Webb Space Telescope (JWST). The JWST is due to be launched in 2018, and its capabilities greatly exceed those of the Hubble. But the James Webb’s destination is Lagrange Point 2 (L2), a stable point in space about 1.5 million km (932,000 miles) from Earth. It will enter a halo orbit around L2, which makes a repair mission difficult. Though deployment problems with the JWST could be corrected by visiting spacecraft, the telescope itself is not designed to be repaired like the Hubble is.
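That 1.5 million km figure can be sanity-checked with the standard textbook approximation for the distance to the Sun-Earth L2 point, which uses the Sun-to-Earth mass ratio of about 333,000 (our own back-of-envelope, not a NASA figure):

```latex
r_{L2} \approx R \left(\frac{M_\oplus}{3 M_\odot}\right)^{1/3}
     = 1.496 \times 10^{8}\ \mathrm{km} \times \left(\frac{1}{3 \times 333{,}000}\right)^{1/3}
     \approx 1.5 \times 10^{6}\ \mathrm{km}
```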
Since the JWST is risky, both in terms of its position in space and its unproven deployment method, some type of insurance policy may be needed to ensure NASA has a powerful telescope operating in space. But without Space Shuttles to visit the Hubble and extend its life, a different vehicle would have to be tasked with any potential future servicing missions. Enter the Dream Chaser Space System (DCSS).
The Dream Chaser Space System is like a smaller Space Shuttle. It can carry seven people into Low-Earth Orbit (LEO). Like the Shuttles, it then returns to Earth and lands horizontally on an airstrip. The DCSS, however, does not have a cargo bay or a robotic arm. If it were used for a Hubble repair mission, all repairs would likely have to be done during spacewalks. The DCSS is designed as a cargo and crew resupply ship for the International Space Station. The much larger shuttles were designed with the Hubble in mind, as well as other tasks, like building and servicing the ISS and recovering satellites from orbit.
The DCSS is built by Sierra Nevada Corporation. It will be launched on an Atlas V rocket and will return to Earth by gliding, landing on any commercial runway. The DCSS has its own reaction control system for manoeuvring in space. Like other commercial space ventures, the development of the DCSS has been partly funded by NASA.
The James Webb has a complex deployment. It will be launched on an Ariane 5 rocket, folded up to fit inside the fairing. The primary mirror on the JWST is made up of 18 segments, which must unfold in three sections for the telescope to function. The telescope’s sun shield, which keeps the JWST cool, must also unfold after deployment. Earlier in the mission, the Webb’s solar array and antenna need to be deployed.
This video shows the deployment of the JWST. It reminds one of a giant insect going through metamorphosis.
If either the mirror, the sunshield, or any of the other unfolding mechanisms fail, then a costly and problematic mission will have to be planned to correct the deployment. If some other crucial part of the telescope fails, then it probably can’t be repaired. NASA needs everything to go well.
People have been waiting for the JWST for a long time. It’s had kind of a tortured path to get this far. We all have our fingers crossed that the mission succeeds. But if there are problems, it may be up to the Hubble to keep doing what it’s always done: provide the kind of science and stunning images that excite scientists and the rest of us about the Universe.
Supernovae are extremely energetic and dynamic events in the universe. The brightest one we’ve ever observed was discovered in 2015 and was as bright as 570 billion Suns. Their significance matches their brilliance: they produce the heavy elements that make up people and planets, and their shockwaves trigger the formation of the next generation of stars.
There are about 3 supernovae every 100 years in the Milky Way galaxy. Throughout human history, only a handful of supernovae have been observed. The earliest recorded supernova was observed by Chinese astronomers in 185 AD. The most famous supernova is probably SN 1054 (historic supernovae are named for the year they were observed), which created the Crab Nebula. Now, thanks to all of our telescopes and observatories, observing supernovae is fairly routine.
But one thing astronomers have never observed is the very early stages of a supernova. That changed in 2013 when, by chance, the automated Intermediate Palomar Transient Factory (IPTF) caught sight of a supernova only 3 hours old.
Spotting a supernova in its first few hours is extremely important, because we can quickly point other ‘scopes at it and gather data about the SN’s progenitor star. In this case, according to a paper published in Nature Physics, follow-up observations revealed a surprise: SN 2013fs was surrounded by circumstellar material (CSM) that it ejected in the year prior to the supernova event. The CSM was ejected at a high rate of approximately 10⁻³ solar masses per year. According to the paper, this kind of instability might be common among supernovae.
The progenitor of SN 2013fs was a red supergiant. Astronomers didn’t think that type of star ejected material prior to going supernova. But follow-up observations with other telescopes showed the supernova explosion moving through a cloud of material previously ejected by the star. What this means for our understanding of supernovae isn’t clear yet, but it’s probably a game changer.
Catching the 3-hour-old SN 2013fs was an extremely lucky event. The IPTF is a fully-automated, wide-field survey of the sky: a system of 11 CCDs installed on a telescope at the Palomar Observatory in California. It takes 60-second exposures at cadences ranging from 5 days apart to 90 seconds apart. This is what allowed it to capture SN 2013fs in its early stages.
Our understanding of supernovae is a mixture of theory and observed data. We know a lot about how they collapse, why they collapse, and what types of supernovae there are. But this is our first data point of an SN in its early hours.
SN 2013fs is 160 million light-years away in a spiral galaxy called NGC 7610. It’s a type II supernova, meaning its progenitor star was at least 8 times, but no more than about 50 times, as massive as our Sun. Type II supernovae are mostly observed in the spiral arms of galaxies.
A supernova is the end state of some of the stars in the universe. But not all stars: only massive stars can go supernova. Our own Sun is much too small.
Stars are like dynamic balancing acts between two forces: fusion and gravity.
As hydrogen is fused into helium in the center of a star, it causes enormous outward pressure in the form of photons. That is what lights and warms our planet. But stars are, of course, enormously massive. And all that mass is subject to gravity, which pulls the star’s mass inward. So the fusion and the gravity more or less balance each other out. This is called stellar equilibrium, which is the state our Sun is in, and will be in for several billion more years.
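This balancing act has a compact textbook expression. In a spherically symmetric star, hydrostatic equilibrium requires the outward pressure gradient to cancel gravity at every radius (standard stellar physics, included here for flavor):

```latex
\frac{dP}{dr} = -\frac{G\, m(r)\, \rho(r)}{r^{2}}
```

Here m(r) is the mass enclosed within radius r, and ρ(r) is the local density. When fusion shuts off, the pressure gradient on the left can no longer balance the gravity on the right, and collapse follows.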
But stars don’t last forever, or rather, their hydrogen doesn’t. And once the hydrogen runs out, the star begins to change. In the case of a massive star, it begins to fuse heavier and heavier elements, until it is fusing iron and nickel in its core. Iron and nickel are a natural limit, because fusing them consumes more energy than it releases. Once a star reaches that stage, fusion stops, and we have a star with an inert core of iron and nickel.
Now that fusion has stopped, stellar equilibrium is broken, and the enormous gravitational pressure of the star’s mass causes a collapse. This rapid collapse causes the core to heat again, which halts the collapse and causes a massive outwards shockwave. The shockwave hits the outer stellar material and blasts it out into space. Voila, a supernova.
The extremely high temperatures of the shockwave have one more important effect. They briefly heat the stellar material outside the core, allowing the fusion of elements heavier than iron. This explains why extremely heavy elements like uranium are much rarer than lighter elements: only stars large enough to go supernova can forge the heaviest elements.
In a nutshell, that is a type II supernova, the same type caught in 2013 when it was only 3 hours old. How the discovery of the CSM ejected by SN 2013fs will reshape our understanding of supernovae is still an open question.
Supernovae are fairly well-understood events, but there are still many questions surrounding them. Whether these new observations of the very earliest stages of a supernova will answer some of our questions, or just create more unanswered questions, remains to be seen.
Venus is often described as being hell itself, because of its crushing pressure, acidic atmosphere, and extremely high temperatures. Dealing with any one of these is a significant challenge when it comes to exploring Venus. Dealing with all three is extremely daunting, as the Soviet Union discovered with their Venera landers.
Actually, dealing with the sulphuric rain is not too difficult, but the heat and the pressure on the surface of Venus are huge hurdles to exploring the planet. NASA has been working on the Venus problem, trying to develop electronics that can survive long enough to do useful science. And it looks like they’re making huge progress.
Scientists at the NASA Glenn Research Center have demonstrated electronic circuitry that should help open up the surface of Venus to exploration.
“With further technology development, such electronics could drastically improve Venus lander designs and mission concepts, enabling the first long-duration missions to the surface of Venus,” said Phil Neudeck, lead electronics engineer for this work.
With our current technology, landers can only withstand surface conditions on Venus for a few hours. You can’t do much science in a few hours, especially when weighed against the mission cost. So increasing the survivability of a Venus lander is crucial.
With a temperature of 460 degrees Celsius (860 degrees Fahrenheit), Venus is almost twice as hot as most ovens. It’s hot enough to melt lead, in fact. Not only that, but the surface pressure on Venus is about 90 times greater than Earth’s, because the atmosphere is so dense.
To protect the electronics on previous Venus landers, they have been contained inside special vessels designed to withstand the pressure and temperature. But these vessels add a lot of mass to the mission, and make sending landers to Venus a very expensive proposition. So NASA’s work on robust electronics is super important when it comes to exploring Venus.
The team at the Glenn Research Center has developed silicon carbide integrated circuits (SiC ICs) that are extremely robust. Two of the circuits were tested inside a special chamber designed to precisely reproduce the conditions on Venus. This chamber is called the Glenn Extreme Environments Rig (GEER).
GEER is a special chamber that can recreate the conditions on any body in our Solar System. It’s an 800-litre (28-cubic-foot) chamber that can simulate temperatures up to 500 °C (932 °F), and pressures from near-vacuum to over 90 times the surface pressure of Earth. GEER can also simulate exotic atmospheres with its precision gas-mixing capabilities, mixing very specific quantities of gases down to parts-per-million accuracy. For these tests, that meant reproducing an accurate recipe of CO2, N2, SO2, HF, HCl, CO, OCS, H2S, and H2O, down to very tiny quantities. And the tests were a success.
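As an aside, it’s worth seeing what “parts-per-million accuracy” implies in practice. Here’s a minimal sketch of the bookkeeping: converting mole fractions of a Venus-like recipe into the partial pressures a mixing system must deliver at 92 bar. The values below are illustrative round numbers, not GEER’s actual recipe:

```python
SURFACE_PRESSURE_BAR = 92.0   # ~92 times Earth's sea-level pressure

# Approximate Venus-like composition; trace values are illustrative only.
mole_fractions = {
    "CO2": 0.965,
    "N2":  0.035,
    "SO2": 150e-6,   # ~150 ppm
    "CO":   20e-6,
    "OCS":   5e-6,
    "H2O":  20e-6,
}

# Dalton's law: each gas's partial pressure is its mole fraction
# multiplied by the total pressure.
for gas, x in mole_fractions.items():
    print(f"{gas:>4}: {x * SURFACE_PRESSURE_BAR:11.6f} bar ({x * 1e6:10,.0f} ppm)")
```

Getting a 150-ppm ingredient right at 92 bar means holding its partial pressure to a few hundredths of a bar, which is why the precision matters.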
“We demonstrated vastly longer electrical operation with chips directly exposed — no cooling and no protective chip packaging — to a high-fidelity physical and chemical reproduction of Venus’ surface atmosphere,” Neudeck said. “And both integrated circuits still worked after the end of the test.”
In fact, the two circuits not only functioned after the test was completed, but they withstood Venus-like conditions for 521 hours. That’s more than 100 times longer than previous demonstrations of electronics designed for Venus missions.
The circuits themselves were originally designed to operate in the extremely high temperatures inside aircraft engines. “This work not only enables the potential for new science in extended Venus surface and other planetary exploration, but it also has potentially significant impact for a range of Earth relevant applications, such as in aircraft engines to enable new capabilities, improve operations, and reduce emissions,” said Gary Hunter, principal investigator for Venus surface electronics development.
The chips themselves were very simple. They weren’t prototypes of any specific electronics that would fly on a Venus lander. What these tests showed is that the new silicon carbide integrated circuits (SiC ICs) can withstand the conditions on Venus.
A host of other challenges remains when it comes to the overall success of a Venus lander. All of the equipment that has to operate there, like sensors, drills, and atmospheric samplers, still has to survive the thermal expansion from exposure to extremely high temperature. Robust new designs will be required in many cases. But this successful test of electronics that can survive without bulky, heavy, protective enclosures is definitely a leap forward.
If you’re interested in what a Venus lander might look like, check out the Venus Sail Rover concept.
Astronomers have finally observed something that was predicted but never seen: a stream of stars connecting the two Magellanic Clouds. In doing so, they began to unravel the mystery surrounding the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC). And it took the extraordinary power of the European Space Agency’s (ESA) Gaia Observatory to do it.
The Large and Small Magellanic Clouds (LMC and SMC) are dwarf satellite galaxies of the Milky Way. The team of astronomers, led by a group at the University of Cambridge, focused on the Clouds and on one particular type of very old star: RR Lyrae. RR Lyrae are pulsating stars that have been around since the early days of the Clouds. The Clouds have been difficult to study because they sprawl so widely across the sky, but Gaia’s unique all-sky view has made this easier.
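Part of what makes RR Lyrae stars such good tracers is that they are standard candles: their intrinsic brightness is roughly known (a typical absolute magnitude of about M ≈ +0.6, a textbook value, not a number from this study). A measured apparent magnitude m then translates directly into a distance d through the distance modulus:

```latex
m - M = 5 \log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right)
```

For example, an RR Lyrae star observed at m ≈ 19.1 gives m - M = 18.5, or a distance of about 50 kiloparsecs, roughly the distance to the LMC.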
The Mystery: Mass
The Magellanic Clouds are a bit of a mystery. Astronomers want to know if our conventional theory of galaxy formation applies to them. To find out, they need to know when the Clouds first approached the Milky Way, and what their mass was at that time. The Cambridge team has uncovered some clues to help solve this mystery.
The team used Gaia to detect RR Lyrae stars, which allowed them to trace the extent of the LMC, something that has been difficult to do until Gaia came along. They found a low-luminosity halo around the LMC that stretched as far as 20 degrees. For the LMC to hold onto stars that far away means it would have to be much more massive than previously thought. In fact, the LMC might have as much as 10 percent of the mass that the Milky Way has.
The Arrival of the Magellanic Clouds
That helped astronomers answer the mass question, but to really understand the LMC and SMC, they needed to know when the Clouds arrived at the Milky Way. Tracking the orbit of a satellite galaxy directly is impossible: satellite galaxies move so slowly that a human lifetime is a tiny blip compared to their orbital periods. This makes their orbits essentially unobservable.
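A quick order-of-magnitude estimate shows why (the round numbers below are assumptions, not figures from the study):

```python
import math

KPC_KM = 3.086e16     # kilometres per kiloparsec
r_km = 50 * KPC_KM    # the LMC sits roughly 50 kpc from the Milky Way
v_km_s = 300          # assumed orbital speed, a few hundred km/s

period_s = 2 * math.pi * r_km / v_km_s
period_gyr = period_s / 3.156e16     # seconds per billion years
print(f"one orbit takes roughly {period_gyr:.1f} billion years")
```

At roughly a billion years per orbit, a satellite galaxy simply doesn’t move far enough in a human career for its orbit to be traced directly.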
But astronomers were able to find the next best thing: the often predicted but never observed stellar stream, or bridge of stars, stretching between the two clouds.
A star stream forms when a satellite galaxy feels the gravitational pull of another body. In this case, the gravitational pull of the LMC allowed individual stars to leave the SMC and be pulled toward the LMC. The stars don’t leave at once, they leave individually over time, forming a stream, or bridge, between the two bodies. This action leaves a luminous tracing of their path over time.
The astronomers behind this study think that the bridge actually has two components: stars stripped from the SMC by the LMC, and stars stripped from the LMC by the Milky Way. This bridge of RR Lyrae stars helps them understand the history of the interactions between all three bodies.
A Bridge of Stars… and Gas
The most recent interaction between the Clouds was about 200 million years ago. At that time, the Clouds passed close by each other. This action formed not one, but two bridges: one of stars and one of gas. By measuring the offset between the star bridge and the gas bridge, they hope to narrow down the density of the corona of gas surrounding the Milky Way.
Mystery #2: The Milky Way’s Corona
The density of the Milky Way’s Galactic Corona is the second mystery that astronomers hope to solve using the Gaia Observatory.
The Galactic Corona is made up of ionised gas at very low density, which makes it very difficult to observe. But astronomers have been scrutinizing it intensely, because they think the corona might harbor most of the missing baryonic matter. Everybody has heard of Dark Matter, which, together with Dark Energy, makes up about 95% of the content of the universe. Dark Matter is something other than the normal matter that makes up familiar things like stars, planets, and us.
The other 5% is baryonic matter, the familiar atoms that we all learn about. But we can only account for about half of that baryonic matter. The rest is called the missing baryonic matter, and astronomers think it’s probably in the galactic corona, but they’ve been unable to measure it.
Understanding the density of the Galactic Corona feeds back into understanding the Magellanic Clouds and their history. That’s because the bridges of stars and gas that formed between the Small and Large Magellanic Clouds initially moved at the same speed. But as they approached the Milky Way’s corona, the corona exerted drag on the stars and the gas. Because the stars are small and dense relative to the gas, they travelled through the corona with essentially no change in their velocity.
But the gas behaved differently. The gas was largely neutral hydrogen, and very diffuse, and its encounter with the Milky Way’s corona slowed it down considerably. This created the offset between the two streams.
Eureka?
The team compared the current locations of the streams of gas and stars. By taking into account the density of the gas, and also how long both Clouds have been in the corona, they could then estimate the density of the corona itself.
When they did so, their results showed that the missing baryonic matter could be accounted for in the corona. Or at least a significant fraction of it could. So what’s the end result of all this work?
It looks like all this work confirms that both the Large and Small Magellanic Clouds conform to our conventional theory of galaxy formation.
The Challenger disaster is one of those things that’s etched into people’s memories. The launch and resulting explosion were broadcast live. Professional astronauts may have been prepared to accept their fate, but that doesn’t make it any less tragic.
There’ve been fitting tributes over the years, with people paying homage to the crew members who lost their lives. But a new tribute is remarkable for its simplicity. And this new tribute is all centred around a soccer ball.
Ellison Onizuka was one of the Challenger seven who perished on January 28, 1986, when the shuttle exploded 73 seconds into its flight. His daughter and other soccer players from Clear Lake High School, near NASA’s Johnson Space Center, gave Ellison a soccer ball to take into space with him. Almost unbelievably, the soccer ball was recovered among the wreckage after the crash.
The soccer ball was returned to the high school, where it was on display for three decades, its meaning fading into obscurity with each passing year. Eventually, the school’s principal, Karen Engle, learned about the significance of the soccer ball’s history.
Because of Clear Lake High School’s close proximity to the Johnson Space Center, another astronaut, Shane Kimbrough, now has a son attending the same school. Kimbrough offered to carry a memento from the high school into space. That’s when Principal Engle had the idea to send the soccer ball with Kimbrough on his mission to the International Space Station.
The causes of the Challenger accident are well-known. An O-ring failed in the cold temperature, and pressurized burning gas escaped and eventually caused the failure of the external fuel tank. The resulting fiery explosion left no doubt about the fate of the people onboard the shuttle.
It’s poignant that the soccer ball got a second chance to make it into space, when the Challenger seven never will. This tribute is touching for its simplicity, and is somehow more powerful than other tributes made with fanfare and speeches.
It must be difficult for family members of the Challenger seven to see the photos and videos of the explosion. Maybe this simple image of a soccer ball floating in zero gravity will take the place of those other images.
The Challenger seven deserve to be remembered for their spirit and dedication, rather than for the explosion they died in.
These are the seven people who perished in the Challenger accident:
Francis R. “Dick” Scobee, Commander
Michael J. Smith, Pilot
Judith Resnik, Mission Specialist
Ellison Onizuka, Mission Specialist
Ronald McNair, Mission Specialist
Gregory Jarvis, Payload Specialist
Christa McAuliffe, Payload Specialist
During its long mission to Saturn, the Cassini spacecraft has given us image after spectacular image of Saturn, its rings, and Saturn’s moons. The images of Saturn’s moon Enceladus are of particular interest when it comes to the search for life.
At first glance, Enceladus appears similar to other icy moons in our Solar System. But Cassini has shown us that Enceladus could be a cradle for extra-terrestrial life.
Our search for life in the Solar System is centred on the presence of liquid water. We don’t know for certain that liquid H2O is required for life. But the Solar System is huge, and the effort required to explore it is immense, so starting our search for life with the search for liquid water is wise. And in the search for liquid water, Enceladus is a tantalizing target.
Though Enceladus looks every bit like a frozen, lifeless world on its surface, it’s what lies beneath its frigid crust that is exciting. Enceladus appears to have a subsurface ocean, at least in its south polar region. And that ocean may be up to 10 km deep.
Before we dive into that (sorry), here are a few basic facts about Enceladus:
Enceladus is Saturn’s sixth largest moon
Enceladus is about 500 km in diameter (Earth’s Moon is 3,474 km in diameter)
Enceladus was discovered in 1789 by William Herschel
Enceladus is one of the most reflective objects in our Solar System, due to its icy surface
In 2005, Cassini first spied plumes of frozen water vapor erupting from the southern polar region. Subsequent study of these plumes, fed by what are called cryovolcanoes, determined that they are the likely source of Saturn’s E Ring. Their existence led scientists to suspect that their source was a sub-surface ocean under Enceladus’ ice crust.
Finding plumes of water erupting from a moon is one thing, but it’s not just water. It’s salt water. Further study showed that the plumes also contained simple organic compounds. This advanced the idea that Enceladus could harbor life.
The geysers aren’t the only evidence for a sub-surface ocean on Enceladus. The southern polar region has a smooth surface, unlike the rest of the moon which is marked with craters. Something must have smoothed that surface, since it is next to impossible that the south polar region would be free from impact craters.
In 2005, Cassini detected a warm region in the south, much warmer than could be caused by solar radiation. The only conclusion is that Enceladus has a source of internal heating. That internal heat would create enough geologic activity to erase impact craters.
So now, two conditions for the existence of life have been met: liquid water, and heat.
The source of the heat on Enceladus was the next question facing scientists. That question is far from settled, and there could be several sources of heat operating together. Among all the possible sources for the heat, two are most intriguing when it comes to the search for life: tidal heating, and radioactive heating.
Tidal heating is a result of rotational and orbital forces. In Enceladus’ case, these forces cause friction which is dissipated as heat. This heat keeps the sub-surface ocean in liquid form, but doesn’t prevent the surface from freezing solid.
Radioactive heating is caused by the decay of radioactive isotopes. If Enceladus started out as a rocky body, and if it contained enough short-lived isotopes, then an enormous amount of heat would be produced for several million years. That action would create a rocky core surrounded by ice.
Then, if enough long-lived radioactive isotopes were present, they would continue producing heat for a much longer period of time. However, radioactive heating isn’t enough on its own. There would have to be tidal heating also.
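The difference between “short-lived” and “long-lived” is just the arithmetic of exponential decay (standard physics, included here for illustration):

```latex
Q(t) = Q_{0}\, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}
```

An isotope like aluminium-26, with a half-life of about 0.7 million years, dumps its heat in a geologically brief burst. Uranium-238, with a half-life of about 4.5 billion years, trickles out heat over the entire age of the Solar System. Enceladus would have needed the first kind to melt early on, and the second kind, plus tidal heating, to stay warm.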
More evidence for a large, sub-surface ocean came in 2014, when Cassini and the Deep Space Network provided gravimetric measurements pointing to the ocean’s presence. Those measurements showed that there is likely a regional, if not global, ocean some 10 km deep, sitting under an ice layer 30 to 40 km thick.
The discovery of a warm, salty ocean containing organic molecules is very intriguing, and has expanded our idea of what the habitable zone might be in our Solar System, and in others. Enceladus is much too distant from the Sun to rely on solar energy to sustain life. If moons can provide their own heat through tidal heating or radioactive heating, then the habitable zone in any solar system wouldn’t be determined by proximity to the star or stars at the centre.
Cassini’s mission is nearing its end, and it won’t fly by Enceladus again. It has told us all it can; it’s up to future missions to expand our understanding of Enceladus.
Numerous missions have been talked about, including two that suggest flying through the plumes and sampling them. One proposal has a sample of the plumes being returned to Earth for study. Landing on Enceladus and somehow drilling through the ice remains a far-off idea better left to science fiction, at least for now.
Whether or not Enceladus can or does harbor life is a question that won’t be answered for a long time. In fact, not all scientists agree that there is a liquid ocean there at all. But whether it does or doesn’t harbor life, Cassini has allowed us to enjoy the tantalizing beauty of that distant object.