Send Your Name to the Moon

The Lunar Reconnaissance Orbiter - artistic impression (NASA)

Have you ever dreamt of travelling to the Moon? Unfortunately, for the time being, this will be a privilege reserved for an elite few astronauts and robotic explorers. But NASA has just released news that you will have the opportunity to send your name to the Moon on board its next big Moon mission, the Lunar Reconnaissance Orbiter. So get over to the mission site and submit your name to be embedded on a computer chip, allowing a small part of you to orbit our natural satellite over 360,000 km (220,000 miles) away…

Last month I looked into how long it would take to travel to the Moon, and the results were wide-ranging: from an impressive eight-hour zip past the Moon by the Pluto-bound New Horizons probe to the slow spiral route taken by the SMART-1 lunar probe, which needed over a year. The next NASA mission, the Lunar Reconnaissance Orbiter (LRO), is likely to take about four days (just a little longer than the manned Apollo 11 mission in July 1969). It is scheduled for launch on an Atlas V 401 rocket in late 2008 and the mission is expected to last for about a year.

Thank goodness we’re not travelling by car: according to the LRO mission facts page, it would take 135 days (that’s nearly five months!) to get there travelling at an average speed of 70 miles per hour.
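
Out of curiosity, here’s a minimal sketch of that arithmetic in Python. The Earth–Moon distance and the spacecraft’s average speed below are my own illustrative assumptions, not figures from the facts page:

```python
# Rough trip-time arithmetic for the Earth-Moon journey.
# The Earth-Moon distance varies between roughly 363,000 km and 406,000 km;
# a value near the lower end roughly reproduces the quoted 135-day figure.

KM_PER_MILE = 1.609344

def travel_days(distance_km: float, speed_mph: float) -> float:
    """Days needed to cover distance_km at a constant speed given in mph."""
    hours = (distance_km / KM_PER_MILE) / speed_mph
    return hours / 24

print(f"By car at 70 mph: {travel_days(363_000, 70):.0f} days")
print(f"Four-day LRO-style trip: {travel_days(363_000, 2_350):.1f} days")  # ~2,350 mph is an illustrative average speed
```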

The LRO is another step toward building a Moon base (by 2020), the stepping stone toward colonizing Mars. The craft will orbit the Moon at an altitude of 50 km (31 miles), gathering global data, constructing temperature maps, capturing high-resolution colour images and measuring the Moon’s albedo. Of course, like all planetary missions to the Moon and Mars, the LRO will look out for water. As the Moon will likely become mankind’s first extra-terrestrial settlement, pinpointing the location of Moon water will be paramount when considering possible sites for colonization.

This is all exciting stuff, but what can we do apart from watch the LRO launch and wait for its data to come back? Wouldn’t it be nice if we could somehow get involved? Although we’re not going to be asked to help out at mission control any time soon, NASA is offering us the chance to send our names to the Moon. But how can this be done? First things first, watch the NASA trailer, and then follow these instructions:

  1. Go to “NASA’s Return to the Moon” page.
  2. Type in your first name and last name.
  3. Click “continue” and download your certificate – your name is going to the Moon!

My LRO Certificate - My name is going to the Moon!

But how will your name be taken to the Moon? It won’t be engraved into the LRO’s bodywork (although that would have been nice!); it will be held on a microchip embedded in the spacecraft’s circuitry. The database of names will be taken on board the LRO and will remain with it for the entire duration of the mission. Anyone who submits their name will be exploring the Moon in their own small way. I’ve signed up (see my certificate, pictured) and you have until June 27, 2008 to do the same.

I’ll see you on board the LRO!

Sources: LRO mission site, Press release

Searching for Water and Minerals on Mars – Implications for Colonization

A Vastitas Borealis crater plus ice in the north polar region - reconstructed image by HRSC (ESA)

New results from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) on board the Mars Reconnaissance Orbiter (MRO) reveal the mineral composition of the floor of Candor Chasma. There is a rich mix of sulfate- and pyroxene-containing deposits in this region, and the CRISM instrument continues to find deposits of minerals never thought to exist on the planet’s surface. However, the primary mission objective is to find evidence of water, past and present, aiding the hunt for the best location for the first Mars settlements. SETI Institute principal investigator and CRISM scientist Dr Adrian Brown answers some of my questions about CRISM and how the results may be useful for future manned missions to the Red Planet…

Part of my duties as Communications Officer with the Mars Foundation (a non-profit organization for Mars settlement designers) is to contact and interview key mission scientists working on missions that could be useful for getting us closer to realising the first manned settlement on the Red Planet. Dr Adrian Brown is one such scientist; the CRISM instrument is one such mission. CRISM, an advanced spectrometer, has been looking out for the mineral fingerprint of water since 2006. Minerals will have dissolved in ancient liquid water, so looking for the dry remnants of these minerals today will help to reveal the surface conditions of the past. Another mission objective is to characterize present water on Mars, seeing how surface water ice forms and how it varies with the seasons. Dr Brown’s current project is to map the seasonal variations of water ice in the Martian southern polar region.

Sulfate- and pyroxene-containing deposits in the Candor Chasma region of Mars (NASA/JPL/JHUAPL/ASU)

In a timely news release, the CRISM mission site has announced new results from the analysis of the mineral distribution on the floor of Candor Chasma (pictured), part of the vast Valles Marineris. Candor Chasma is a deep, steep-sided valley about 813 km (505 miles) long and has been cited as a possible location for the Hillside Settlement concept conceived by the Mars Foundation. In fact, the inspiration behind this settlement concept was the first permanent settlement, aptly called “Underhill”, in Kim Stanley Robinson’s epic novel Red Mars. So there is obvious interest in what Candor Chasma can offer the colonists inhabiting the Hillside Settlement with easy access to locally mined minerals.

The CRISM instrument has discovered quantities of sulfate- and pyroxene-rich deposits in the region, useful for many industrial processes. In our interview, Dr Brown outlined other important minerals that CRISM has found and some of their common uses here on Earth:

“These [minerals] include kaolinite (chinaware is made of this mineral), talc (the main constituent of many soaps) and hydrated silica (perhaps like chert, which Indian knives were carved out from). The small amounts of these minerals means it has been impossible to discover them before CRISM, and previously they were discounted in all our modelling of Mars.” – Dr Adrian Brown, SETI Institute principal investigator and CRISM scientist.

For me, the most revealing part of our conversation was Brown’s estimate of the sheer quantity of water held as ice in the north polar cap. The north pole hides under a 1,000 km (620 mile) diameter disk of near-pure water ice (with some impurities like sand and dust, giving it a pink hue). This disk is 3 km (1.9 miles) high and holds a staggering 2.35 million cubic kilometers of water. That’s enough water to cover the continental US to a depth of 200 meters! Throw in the water that is held at the south pole (a carbon dioxide/water ice disk 300 km in diameter and 2 km high) and we’re looking at roughly the volume of water ice held in the Greenland ice sheet (or about 1/500th of the amount of water in our oceans). It’s not hard to imagine that, if a permanent Mars colony is established, mining operations for water ice would be common.
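
Those disk dimensions really do add up to a couple of million cubic kilometres. Here’s a minimal back-of-the-envelope check, treating the cap as a flat cylinder (my simplification, not Dr Brown’s):

```python
import math

# Back-of-the-envelope check of the north polar cap figures quoted above,
# treating the cap as a simple flat disk (a deliberate simplification).

diameter_km = 1000   # ~1,000 km across
height_km = 3        # ~3 km thick

volume_km3 = math.pi * (diameter_km / 2) ** 2 * height_km
print(f"Disk volume: {volume_km3 / 1e6:.2f} million cubic km")  # ~2.36 million km^3, matching the ~2.35 million quoted
```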

Turning on the Tap - Commissioned artwork - Colonist tapping into a sub-surface aquifer (©Mars Foundation)

But it doesn’t stop there; water could also be extracted from the atmosphere. One of Dr Brown’s studies focuses on measuring the variation of water ice crystals in the clouds throughout the seasons. There should also be quantities of water vapour in the warmer equatorial regions.

There is also the possibility of extracting water from the permafrost layers below the Martian regolith. The Phoenix Mars lander (set to arrive at the Red Planet on May 25th) will be able to investigate possible sources of frozen water below the surface. Dr Brown also indicated that observations of apparent gullies by the Mars Orbiter Camera (on board NASA’s Mars Global Surveyor, lost in November 2006) may reveal the location of sub-surface aquifers, whose water could have gushed across the surface, for future colonists to “tap” into (pictured). However, some studies dispute this in favour of dry debris flows creating the gullies, and a definitive answer will not be arrived at until the gullies are analysed in situ. And if he had the chance, I think Dr Brown would be the first to look into this exciting possibility, judging by his answer when I asked him: Would you like to go to Mars?

“Of course I would love to travel to Mars, most of all to go to the polar regions and observe them with my own eyes. If I could actually go to the surface of Mars to investigate the fascinating geology of Nili Fossae and Valles Marineris, that would be so awesome. And to visit a gully site and dig behind it to try and find its source… and to witness the cold volcanoes of mud that erupt in the polar cryptic region during springtime… to go and understand these things that have us puzzled at the moment would be so amazing… and of course more questions would be raised, more geological problems unearthed, and the cycle of understanding the Red Planet would continue.” – Dr Adrian Brown

I share his enthusiasm and look forward to more discoveries by CRISM.

For more on Dr Adrian Brown’s work, check out his website: http://abrown.seti.org/

Sources: The Mars Foundation, CRISM

Could Jupiter Wreck the Solar System?

Could Jupiter throw the planets into each other? (NASA)

Scientists have expressed their concern that the Solar System may not be as stable as it seems. Happily orbiting the Sun, the eight planets (plus Pluto and other minor planets) appear to have a high degree of long-term gravitational stability. But Jupiter has a huge gravitational influence over its siblings, especially the smaller planets. It appears that the long-term prospects for the smallest planet are bleak. The huge gravitational pull of Jupiter seems to be bullying Mercury into an increasingly eccentric death-orbit, possibly flinging the cosmic lightweight into the path of Venus. To make things worse, there might be dire consequences for Earth…

Jupiter appears to be causing some planetary trouble. This gas giant orbits the Sun at a distance of approximately 5 AU (748 million km), five times further from the Sun than the Earth. Although the distance may be huge, this 318 Earth-mass planet’s gravitational pull is very important to the inner Solar System planets, including tiny Mercury. Mercury orbits the Sun in an elliptical orbit, ranging from 0.47 AU at aphelion to 0.31 AU at perihelion, and is only 0.055 Earth masses (barely five times the mass of our Moon).
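
Those two distances already capture how lopsided Mercury’s path is. Here’s a quick sketch of the standard orbital-elements arithmetic (textbook formulae, not something from the researchers’ paper):

```python
# Recovering Mercury's orbital shape from the aphelion and perihelion quoted above.
# Eccentricity e = (r_aphelion - r_perihelion) / (r_aphelion + r_perihelion).

aphelion_au = 0.47
perihelion_au = 0.31

eccentricity = (aphelion_au - perihelion_au) / (aphelion_au + perihelion_au)
semimajor_axis_au = (aphelion_au + perihelion_au) / 2

print(f"Semi-major axis: {semimajor_axis_au:.2f} AU")  # ~0.39 AU
print(f"Eccentricity:    {eccentricity:.2f}")          # ~0.21, already the most eccentric planetary orbit
```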

Running long-term simulations of the orbits of our Solar System bodies, scientists in France and California have discovered something quite unsettling. Jacques Laskar of the Paris Observatory, as well as Konstantin Batygin and Gregory Laughlin of the University of California, Santa Cruz, have found that Jupiter’s gravity may perturb Mercury’s eccentric orbit even further. So much so that their simulations predict Mercury’s orbit may extend into the path of Venus; or the planet might simply fall into the Sun. The researchers formulate four possible scenarios for what may happen as Mercury gets disturbed:

  1. Mercury will crash into the Sun
  2. Mercury will be ejected from the solar system altogether
  3. Mercury will crash into Venus
  4. Mercury will crash into Earth

The last option is obviously the worst-case scenario for us, but all of them are bad news for Mercury; the small planet’s fate appears to be sealed. So what’s the likelihood Mercury could crash into the Earth? If it did, the asteroid impact that most likely wiped out the dinosaurs would seem like a drop in the ocean compared with a planet 4,880 km in diameter slamming into us. There would be very little left after this wrecking-ball impact.

But here’s the kicker: there is only about a 1% chance that these gravitational instabilities of the inner Solar System will cause any kind of chaos before the Sun turns into a red giant and swallows Mercury, Venus, Earth and Mars in around 7 billion years’ time. So, no need to look out for death-wish Mercury quite yet… there’s a very low chance that any of this will happen. But there is some good news for Mars: the researchers have also found that if chaos does ensue, the Red Planet may be flung out of the Solar System, possibly escaping our expanding Sun. So, let’s get those Mars colonies started! Well, within the next few billion years anyhow…

These results by Batygin and Laughlin will be published in The Astrophysical Journal.

Source: Daily Galaxy

Here are some facts on Mercury.

Ocean Currents May Cool the Climate for a Decade

False-color image of the temperature of the Gulf Stream off the East Coast of the US (NASA)

It would appear that rising atmospheric temperatures may be slowed or even stopped over the next ten years due to periodic changes in ocean circulation. As the Gulf Stream slows the flow of warm tropical waters from the equator to the North Atlantic, North America and Northern Europe will experience a slight reduction in atmospheric temperatures. This appears to be a natural process that has occurred before, according to historic records. But don’t go getting too excited; this will only pause the global warming trend at best. The UN’s Intergovernmental Panel on Climate Change (IPCC) forecasts a global temperature rise of 0.2°C (0.36°F) per decade, and this trend will continue after the currents have settled…

The oceans are the planet’s huge heaters and refrigerators. Within the oceans are complex and highly dynamic flows of warm and cool streams. One stream in particular, the Gulf Stream, reaches from the tropical waters of the Gulf of Mexico to the cold waters of Northern Europe. As the tropical stream of water travels north and cools, it sinks and flows back in the opposite direction, carrying the cold North Atlantic water south. This ocean “conveyor belt” maintains the surprisingly warm weather systems that Europe experiences. Without this supply of ocean heat, countries at these high latitudes (like the UK, where weather systems are dominated by ocean conditions) would experience the harsh winters more usually associated with Moscow.

So, in research published in Nature on Thursday, it would seem the North Atlantic is about to get a little cooler. Mojib Latif, professor at the Leibniz Institute of Marine Sciences in Kiel, northern Germany, and his team predict a cooling in North American and European regions, whilst the temperatures of tropical regions will be stabilized. Scientists have known about the weakening of the Gulf Stream for a long time, but this is one of the first studies to demonstrate how the process may influence global temperatures and how global warming isn’t necessarily a gradual increase. But there’s a catch: this trend can only be sustained for ten years, after which atmospheric global warming will continue to increase at the IPCC rate. The German scientists are clear that they are not disputing the IPCC figures:

“Just to make things clear, we are not stating that anthropogenic [man-made] climate change won’t be as bad as previously thought. What we are saying is that on top of the warming trend, there is a long-periodic oscillation that will probably lead to a lower temperature increase than we would expect from the current trend during the next years.” – Mojib Latif

This work predicts that the Gulf Stream will slow over the next few years, but other studies argue the change is happening now. The saltiness of the Atlantic waters is also a concern: due to the huge input of fresh melt water from Greenland’s glaciers and Siberian permafrost over the past few years, the stream has been strongly affected. It would appear there are many factors to consider when assessing how these vast currents can be influenced.

There is a warning in this new study. The weakening of the Gulf Stream is part of a natural oscillation. We may be facing a weakened stream over the next ten years, cooling the climate, but there will also be a strengthening of ocean currents in the future. What happens when the stronger currents begin heating North Atlantic waters?

Source: Physorg.com

Explore Earth’s Ionosphere with Google Earth

Computer generated image of the density of electrons in the ionosphere (Cathryn Mitchell, University of Bath)

The ionosphere is the final layer of atmosphere before space. This highly dynamic region is constantly exposed to the full intensity of the Sun, with harsh ultraviolet radiation breaking down molecules and atoms; highly charged ions and free electrons therefore fill the ionospheric layers. Critical to terrestrial communications, the ionosphere also plays host to the largest lightshow on Earth, the aurora. Now NASA-funded research has developed a live “4D Ionosphere” plugin for Google Earth, so you can fly through the atmosphere’s uppermost reaches without even leaving your desk…

The ionosphere is highly important to us. Radio operators will be acutely aware of how the ionosphere influences radio wave propagation. Ever since Guglielmo Marconi’s experiments with trans-Atlantic radio communications between England and Newfoundland in 1901, the ionosphere has shaped our ability to communicate over large distances without the aid of modern satellite technology. The ionosphere creates a charged, reflective barrier that radio waves can be bounced off, bypassing the blocking effect of the curvature of the Earth. However, radio signals are highly sensitive to variations in the ionosphere and can be “blacked out” should a major solar storm pump charged particles into the magnetosphere and ionosphere. Even modern Global Positioning System (GPS) signals are influenced by this atmospheric layer, which reflects and attenuates radio waves. As aircraft, ships and other modes of transport now depend on GPS positioning, it is essential that we fully comprehend the physics behind the ionosphere.

A screenshot of Google Earth, with ionosphere overlaid (Google)

In an effort to get a better grasp of the state of the ionosphere, a “live” plugin for Google Earth has just been announced. Funded by NASA’s Living With a Star (LWS) program, it is hoped that this tool can be used by the public and professionals alike to see the current state of the electron content of the ionosphere. Once it is downloaded and running, the viewer can rotate the globe and see where electron density is high and where it is low. In dense regions it is very hard for radio waves to propagate, signifying that radio quality will be poor or blocked altogether; in Google Earth, these regions are highlighted in red. The blue regions show “normal” propagation conditions; expect good-quality signals in those locations.
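
The reason electron density matters for radio comes down to the plasma (or “critical”) frequency of a layer: signals below it bounce back toward the ground, while signals well above it punch through into space. Here’s a hedged sketch using the standard approximation and typical textbook densities; the plugin’s actual data values are not assumed here:

```python
import math

# Illustrative link between electron density and radio propagation: waves below a
# layer's "critical" (plasma) frequency are reflected, higher frequencies pass through.
# The electron densities below are typical textbook values, not data from the plugin.

def plasma_frequency_mhz(electron_density_per_m3: float) -> float:
    """Approximate plasma frequency f_p ~ 8.98 * sqrt(N_e) Hz, with N_e in electrons/m^3."""
    return 8.98 * math.sqrt(electron_density_per_m3) / 1e6

for label, n_e in [("quiet night-time F-layer", 1e11), ("daytime F-layer peak", 1e12)]:
    print(f"{label}: ~{plasma_frequency_mhz(n_e):.1f} MHz critical frequency")
```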

The reason why this new system has been dubbed “4D Ionosphere” is that you can view the ionosphere in three spatial dimensions, and the data is refreshed every ten minutes to give the extra time dimension.

This isn’t the first time Google Earth has been used by organizations for space-based research. On February 24th, I reported that a plugin had been released to track the space debris currently orbiting our planet. Nancy also gave the new Google Sky a test drive in March, a great way to learn about astronomy through this user friendly interface.

I can see lots of applications for this tool already. Firstly, I’d be very excited to compare the ionosphere during periods of high solar activity with periods when the Sun is at solar minimum (like now). This would be especially interesting in the polar regions, within the auroral zone, where high quantities of solar wind particles ignite aurorae. There are also possible applications for amateur (ham) radio operators, who could use this as a means to forecast the strength of radio signals during campaigns. I am, however, uncertain how accurate or detailed these measurements will be, but the tool at least gives a very interesting look into the current state of this fascinating region of the atmosphere.

Source: NASA

Gaia Hypothesis: Could Earth Really be a Single Organism?

The Earth as viewed from the ISS (NASA)

Can a planet like Earth be considered a single living organism? After all, the human body plays host to trillions of bacteria, and yet we consider the human body to be a single organism. The Gaia Hypothesis (popularly known as “Gaia Theory”) goes beyond the individual organisms living on Earth: it encompasses all the living and non-living components of Earth’s biosphere and proposes that the complex interacting systems regulate the environment to a very high degree (here’s a biosphere definition). So much so, that the planet may be viewed as a single organism in its own right. What’s more, this hypothesis was developed by a NASA scientist who was looking for life on Mars…

When you stop to think about it, our planet does act like a huge organism. If you look at the interrelationships between plants and the atmosphere, animals and humans, rocks and water, a complex pattern of symbiotic processes seems to complement each other perfectly. Should one system be pushed out of balance by some external force (such as a massive injection of atmospheric carbon dioxide after a volcanic event), other processes are stimulated to counteract the instability (more phytoplankton appear in the oceans to absorb the carbon dioxide in the water). Many of these processes could be interpreted as a “global immune system”.

James Lovelock (Guardian.co.uk)

The hypothesis that our planet could be a huge organism was the brainchild of British scientist Dr James Lovelock. In the 1960s, when Lovelock was working with NASA on methods to detect life on the surface of Mars, the idea came about as he tried to explain why Earth’s atmosphere, unlike the carbon dioxide-dominated atmosphere of Mars, is so rich in nitrogen and reactive oxygen. Lovelock recently defined Gaia as:

“…organisms and their material environment evolve as a single coupled system, from which emerges the sustained self-regulation of climate and chemistry at a habitable state for whatever is the current biota.” – Lovelock J. (2003) The living Earth. Nature 426, 769-770.

So, Lovelock’s work points to interwoven ecological systems that promote the development of the life currently living on Earth. Naturally, the statement that Earth itself is actually one living organism encompassing the small-scale mechanisms we experience within our biosphere is a highly controversial one, but some experiments and tests have been carried out to support the idea. Probably the most famous model of the Gaia hypothesis is the “Daisyworld” simulation. Daisyworld is an imaginary planet whose surface is either covered in white daisies, black daisies or nothing at all. This imaginary world orbits a sun, which provides the only source of energy for the daisies to grow. Black daisies have a very low albedo (i.e. they absorb rather than reflect the sun’s light), thereby getting hot and heating up the atmosphere surrounding them. White daisies have a high albedo, reflecting most of the light back out of the atmosphere; the white daisies stay cool and do not contribute to atmospheric warming.
Java applet of the Daisyworld simulation »

When this basic computer simulation runs, a rather complex picture emerges. As each daisy type grows wherever conditions suit it best, the populations of white and black daisies fluctuate, regulating the atmospheric temperature. When the simulation starts there are huge swings in population and temperature, but the system quickly stabilizes. Should the solar irradiance suddenly change, the ratio of white to black daisies compensates to stabilize atmospheric temperatures once more. The simulated Daisyworld plants self-regulate the atmospheric temperature, optimizing their own growth.
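
For anyone who would rather tinker than take this on faith, here is a minimal Daisyworld-style sketch in Python. The parameter values are illustrative assumptions of mine, not taken from the Java applet linked above; varying the luminosity shifts the balance between black and white daisies, keeping the planet’s temperature within a much narrower range than a lifeless world would manage:

```python
# A minimal Daisyworld sketch in the spirit of Watson & Lovelock's 1983 model.
# All parameter values are illustrative assumptions, not taken from the Java applet.

SIGMA = 5.67e-8          # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 917.0                # solar flux at Daisyworld (W/m^2), an assumption
ALBEDO = {"white": 0.75, "black": 0.25, "bare": 0.50}
DEATH_RATE = 0.3
Q = 20.0                 # heat-transfer parameter between daisy patches and the planet
OPT_T = 295.5            # temperature (K) at which daisies grow best

def growth_rate(local_temp_k: float) -> float:
    """Parabolic growth curve: zero outside roughly 278-313 K, peaking at OPT_T."""
    return max(0.0, 1.0 - 0.003265 * (OPT_T - local_temp_k) ** 2)

def run(luminosity: float, steps: int = 500):
    white, black = 0.2, 0.2                      # fractional surface coverage
    for _ in range(steps):
        bare = max(0.0, 1.0 - white - black)
        planet_albedo = (white * ALBEDO["white"] + black * ALBEDO["black"]
                         + bare * ALBEDO["bare"])
        planet_temp = (luminosity * S * (1 - planet_albedo) / SIGMA) ** 0.25
        # Local temperatures: dark patches run warmer, light patches cooler.
        t_white = Q * (planet_albedo - ALBEDO["white"]) + planet_temp
        t_black = Q * (planet_albedo - ALBEDO["black"]) + planet_temp
        white += white * (bare * growth_rate(t_white) - DEATH_RATE) * 0.1
        black += black * (bare * growth_rate(t_black) - DEATH_RATE) * 0.1
        white, black = max(white, 0.01), max(black, 0.01)   # keep a small seed population
    return white, black, planet_temp

for lum in (0.8, 1.0, 1.2):
    w, b, t = run(lum)
    print(f"luminosity {lum:.1f}: white={w:.2f} black={b:.2f} planet T={t - 273.15:.1f} C")
```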

This is an oversimplified view of what might be happening on Earth, but it demonstrates the principal argument: Gaia is a collection of self-regulating systems. Gaia helps to explain why atmospheric gas quantities have remained fairly constant since life formed on Earth. Before life appeared on our planet 2.5 billion years ago, the atmosphere was dominated by carbon dioxide. Life quickly adapted to absorb this atmospheric gas, generating nitrogen (from bacteria) and oxygen (from photosynthesis). Since then, the atmospheric components have been tightly regulated to optimize conditions for the biomass. Could it also explain why the oceans aren’t too salty? Possibly.

This self-regulatory system is not a conscious process; it is simply a collection of feedback loops, all working to optimize life on Earth. The hypothesis does not interfere with the evolution of species, nor does it point to a “creator”. In its moderate form, Gaia is a way of looking at the dynamic processes on our planet, providing an insight into how seemingly disparate physical and biological processes are actually interlinked. As to whether Gaia exists as an organism in its own right, that depends on your definition of “organism” (the fact that Gaia cannot reproduce itself is a major drawback for viewing Earth as an organism), but it certainly makes you think…

Original source: Guardian

Titan Launch Pad Tower Blown Up at Cape Canaveral (Gallery)

The demolition of the old Titan pad gantry, photo sequence (Chris Miller/Spaceflight Now)

Cape Canaveral’s Titan launch pad gantry was demolished on Sunday. The tower was built in the 1990s to support the US Air Force Titan 4 rocket program. The site has been used for NASA missions as well, including the launch of the Cassini-Huygens Saturn mission in 1997. Launch Complex 40 is being demolished and then refurbished to make way for the new SpaceX Falcon rocket launch facility. Now that the gantry is rubble, the clean-up operation can begin…

Debris of the launch gantry (Chris Miller/Spaceflight Now)

At 9am on Sunday, April 27th, 200 pounds of high explosives brought the Complex 40 mobile service tower crashing down. The tower was responsible for housing and preparing the highly successful Titan rockets for launch. Mainly used for military payloads, the Titan 4 series also sent the NASA Cassini probe on its way to Saturn on October 15th, 1997. A Titan 4 rocket was also used to send the ill-fated Mars Observer mission to the Red Planet on September 25th, 1992. Mission controllers lost contact with Observer when it was three days away from orbital insertion.

A Titan 4 rocket pre-launch as the gantry rolls back. The lightning protection system surrounds the pad (Justin Ray/Spaceflight Now)

The gantry weighed nearly 6,500 tonnes and was fitted with an advanced satellite-processing clean room. The tower supported a total of 17 launches, deploying sophisticated surveillance and communication satellites for the US government; two of these launches were devoted to NASA interplanetary missions. The last Titan 4 was launched three years ago, handing heavy-lift duties over to the modern Atlas 5 and Delta 4 rocket systems. In its glory days the Titan 4 was the largest rocket available, carrying the heaviest payloads into space.

View the complete series of images taken of the demolition of the Complex 40 tower »

Now that the tower has been removed, Space Exploration Technologies (SpaceX) can begin to set up the commercial launch site as the East Coast base of operations for its Falcon 9 rocket system, which is currently under development. But why can’t the tower be renovated for SpaceX launches? The Falcon 9 will be assembled horizontally and rolled to the launch pad shortly before launch; the gantry is therefore superfluous to the company’s needs at Complex 40.

SpaceX Falcon 1 rocket system in 2004 (SpaceX)

However, not all the infrastructure of the site will be removed. The launch pad’s concrete deck and flame duct, water deluge system, electrical systems, lightning towers and instrumentation in the bay under the pad will be reused. The existing office space will also be renovated for SpaceX use. Since last October, SpaceX employees have been working at the site, removing any equipment not compatible with the Falcon system. The site will be up and running in time to begin supplying the International Space Station when NASA’s Space Shuttle fleet is retired in 2010. Complex 40 will live on, minus gantry, for NASA contracted launches and other commercial satellite orbital insertions by SpaceX.

Sources: SpaceX, Spaceflight Now

Supermassive Black Hole Kicked Out of Galaxy: First Ever Observation

Colliding galaxies can force the supermassive black holes in their cores together (NCSA)

For the first time, the most extreme collision to occur in the cosmos has been observed. Galaxies are known to hide supermassive black holes in their cores, and should the galaxies collide, tidal forces will cause massive disruption to the stars orbiting around the galactic cores. If the cores are massive enough, the supermassive black holes may become trapped in gravitational attraction. Do the black holes merge to form a super-supermassive black hole? Do the two supermassive black holes spin, recoil and then blast away from each other? Well, it would seem both are possible, but astronomers now have observational evidence of a black hole being blasted away from its parent galaxy after colliding with a larger cousin.

Most galaxies in the observable universe contain supermassive black holes in their cores. We know they are hiding inside galactic nuclei because they gravitationally dominate that region of space, pulling in stars that orbit too close. Recent observations of galactic cores show stars rapidly orbiting something invisible. By calculating the stars’ orbital velocities, astronomers have deduced that the invisible body they orbit must be something very massive: a supermassive black hole of hundreds of millions of solar masses. Such black holes are also the source of bright quasars in active, young galaxies.
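
The deduction works because an orbit ties a star’s speed directly to the mass enclosed within it. Here’s a rough sketch with made-up round numbers, not measurements of any particular galaxy:

```python
# Illustrative version of the mass argument: for a star on a roughly circular orbit,
# the enclosed mass follows from M = v^2 * r / G. The velocity and radius below are
# invented round numbers, not measurements from any specific galaxy.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
SOLAR_MASS = 1.989e30    # kg
PARSEC = 3.086e16        # m

def enclosed_mass_solar(orbital_speed_km_s: float, radius_pc: float) -> float:
    v = orbital_speed_km_s * 1e3
    r = radius_pc * PARSEC
    return v**2 * r / G / SOLAR_MASS

# A star whipping around at 1,000 km/s just one parsec from the centre
# implies an unseen mass of a few hundred million Suns.
print(f"{enclosed_mass_solar(1000, 1.0):.2e} solar masses")
```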

Now, the same research group that made the astounding discovery of the structure of a black hole’s molecular torus, by analysing echoed light from an X-ray flare (originating from star matter falling into a supermassive black hole’s accretion disk), has observed one of these supermassive black holes being kicked out of its parent galaxy. What caused this incredible event? A collision with another, bigger supermassive black hole.

A cartoon of a superkick (MPE/HST)

Stefanie Komossa and her team from the Max Planck Institute for Extraterrestrial Physics (MPE) made the discovery. This work, to be published in Astrophysical Journal Letters on May 10th, verifies something that had previously only been modelled in computer simulations. Models predict that as two fast-rotating black holes begin to merge, gravitational radiation is emitted from the colliding galaxies. As the waves are emitted mainly in one direction, the black holes are thought to recoil – much like the kick that accompanies firing a rifle. The situation can also be thought of as two spinning tops getting closer and closer until they meet: due to their high angular momentum, the tops experience a “kick”, very quickly ejecting them in opposite directions. This is essentially what two supermassive black holes are thought to do, and now this recoil has been observed. What’s more, the ejected black hole’s velocity has been measured by analysing the broad spectroscopic emission lines of the hot gas surrounding the black hole (its accretion disk). The ejected black hole is travelling at a velocity of 2,650 km/s (1,647 mi/s). The accretion disk will continue to feed the recoiled black hole for many millions of years on its lonely journey through space.
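
To see how a spectrum yields a velocity, here is the non-relativistic Doppler arithmetic in miniature. The rest wavelength is the familiar H-beta line, but the shifted wavelength is a value I picked purely to reproduce a ~2,650 km/s offset; it is not a number from Komossa’s data:

```python
# How a velocity falls out of a spectrum: the broad lines from gas bound to the
# black hole are Doppler-shifted relative to the host galaxy's narrow lines.
# The observed wavelength below is invented for illustration only.

C_KM_S = 299_792.458

def doppler_velocity_km_s(observed_nm: float, rest_nm: float) -> float:
    """Non-relativistic Doppler estimate: v ~ c * (lambda_obs - lambda_rest) / lambda_rest."""
    return C_KM_S * (observed_nm - rest_nm) / rest_nm

h_beta_rest_nm = 486.13
print(f"{doppler_velocity_km_s(490.43, h_beta_rest_nm):.0f} km/s")  # ~2,650 km/s for a ~4.3 nm shift
```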

Supporting the evidence that this is indeed a recoiling supermassive black hole, Komossa analysed the parent galaxy and found hot gas emitting X-rays from the location where the black hole collision took place.

Now Komossa and her team hope to answer the questions this discovery has created: Did galaxies and black holes form and evolve jointly in the early Universe? Or was there a population of galaxies which had been deprived of their central black holes? And if so, how was the evolution of these galaxies different from that of galaxies that retained their black holes?

It is hoped that the combined efforts of observatories on Earth and in space can be used to find more of these “superkicks” and begin to answer these questions. The future detection of gravitational waves will also help, as this type of collision is predicted to wash the Universe in powerful gravitational waves.

Source: MPE News

Global Warming is Accelerating Faster than can be Naturally Repaired

It appears the Earth’s climate has the ability to naturally regulate atmospheric carbon dioxide levels. Historic records extracted from ice cores show quantities of CO2 have varied widely over the last several hundred thousand years. This evidence appears to support the global warming critics’ view that the currently observed human-induced greenhouse effect is actually naturally occurring and that the effect of carbon on the climate is over-hyped. However, a new study shows that although carbon dioxide levels may have been higher in the past, the Earth’s natural processes had time to react and counteract the warming. The current trend of industrial emissions is far more rapid than any historic natural process; the natural climate “feedback loops” cannot catch up to remove CO2 from the atmosphere.

More bad news about the outlook for our climate I’m afraid. It would appear that the carbon dioxide emissions we have been generating since the Industrial Revolution have increased too rapidly for the Earth’s natural defences to catch up. This new finding comes from the analysis of bubbles of air trapped in ancient ice in Antarctica, dated to 610,000 years ago.

Long before man started burning coal and oil products, the Earth would naturally generate its own carbon emissions. The main polluters were volcanic eruptions, sending millions of tonnes of carbon dioxide into the atmosphere. Surely this had an effect on the state of the climate? Apparently so, but the increased levels of carbon dioxide produced by individual eruptions could be dealt with naturally over thousands of years. The climate tends toward balance: should one quantity increase or decrease, other mechanisms are naturally triggered to bring the system back into equilibrium.

These mechanisms are known as “feedback loops”. Feedback loops are common in nature: should one quantity change, the production of other quantities may speed up or slow down in response. In the case of carbon emissions from volcanic activity, levels of the gas appear to have been controlled by a natural “negative feedback” loop, akin to a carbon thermostat: when carbon dioxide levels were too high, another process was triggered to remove the carbon dioxide from the atmosphere. However, the sustained input of carbon dioxide from industrial burning by human activity has dwarfed historic volcanic carbon output, overwhelming any natural negative feedback mechanism.
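
As a purely illustrative toy (every number below is my own assumption, not from the study), a single slow negative-feedback loop copes with a volcanic-scale trickle of CO2 but simply cannot keep up with an industrial-scale input:

```python
# A toy "carbon thermostat": one negative-feedback loop that pulls atmospheric CO2
# back toward a baseline at a fixed (slow) rate. The numbers are illustrative only;
# the point is the qualitative behaviour described above.

BASELINE_PPM = 280.0
RELAXATION_PER_YEAR = 1.0 / 50_000   # feedback removes ~1/50,000th of the excess each year

def simulate(emission_ppm_per_year: float, years: int, co2_ppm: float = BASELINE_PPM) -> float:
    for _ in range(years):
        co2_ppm += emission_ppm_per_year                           # input (volcanic or industrial)
        co2_ppm -= RELAXATION_PER_YEAR * (co2_ppm - BASELINE_PPM)  # slow negative feedback
    return co2_ppm

# A slow, volcanic-scale drip stays near the baseline; an industrial-scale input does not.
print(f"Slow input (0.0001 ppm/yr for 100,000 yr): {simulate(0.0001, 100_000):.0f} ppm")
print(f"Fast input (0.5 ppm/yr for 250 yr):        {simulate(0.5, 250):.0f} ppm")
```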

This new study, co-authored by Richard Zeebe, is published in the journal Nature Geoscience. In an interview at the University of Hawaii, Zeebe commented on the climate’s ability to remove carbon dioxide from the atmosphere: “These feedbacks operate so slowly that they will not help us in terms of climate change […] that we’re going to see in the next several hundred years. Right now we have put the system entirely out of equilibrium.”

Zeebe and his team noticed that the levels of carbon dioxide and atmospheric temperature correlated, rising and falling together. “When the carbon dioxide was low, the temperature was low, and we had an ice age,” he said. His study states that over the last 600,000 years carbon dioxide levels fluctuated by only 22 parts per million; since the 18th century, human activity has injected 100 parts per million. Humans are adding carbon dioxide to the atmosphere roughly 14,000 times faster than any natural process is capable of doing. This increase has negated any chance for the climate to naturally bring carbon dioxide levels back down to pre-industrial levels in the short term. If we were to stop all emissions tomorrow, it would take the planet hundreds of thousands of years to recover naturally.

Sadly, we’re not even close to slowing carbon emissions. Only last week, the US reported that carbon dioxide levels were up 2.4 parts per million during 2007 alone. The future is bleak for the planet balancing back into its prehistoric atmospheric carbon equilibrium…

Source: Reuters

Self-Healing Computers for Damaged Spaceships

View of the Westar 6 satellite while Dale Gardner retrieves it during STS-51-A in 1984 (NASA)

What happens when a robotic space probe breaks down millions of miles away from the nearest spacecraft engineer? If there is a software bug, engineers can sometimes correct the problem by uploading new commands, but what if the computer hardware fails? If the hardware controls something critical like the thrusters or communications system, there isn’t a lot mission control can do; the mission may be lost. Sometimes failed satellites can be recovered from orbit, but there’s no interplanetary towing service for missions to Mars. Can anything be done for damaged computer systems far from home? The answer might lie in a project called “Scalable Self-Configurable Architecture for Reusable Space Systems”. But don’t worry, machines aren’t becoming self-aware; they’re just learning how to fix themselves…

When spacecraft malfunction on the way to their destinations, often there’s not a lot mission controllers can do. Of course, if they are within our reach (i.e. satellites in Earth orbit), there’s the possibility that they can be picked up by Space Shuttle crews or fixed in orbit. In 1984, for example, two malfunctioning satellites were picked up by Discovery on the STS-51A mission (pictured above); both communications satellites had malfunctioning motors and couldn’t maintain their orbits. In 1993, Space Shuttle Endeavour (STS-61) carried out an orbital servicing mission to correct the Hubble Space Telescope’s flawed optics. (Of course, there’s always the option that top-secret dead spy satellites can be shot down too.)

Although both of the retrieve/repair mission examples above most likely involved mechanical failure, the same could have been done if their onboard computer systems failed (if it was worth the cost of an expensive manned repair mission). But what if one of the robotic missions beyond Earth orbit suffered a frustrating hardware malfunction? It needn’t be a huge error either (if it happened on Earth, the problem could probably be fixed quickly), but in space with no engineer present, this small error could spell doom for the mission.

So what’s the answer? Build a computer that can fix itself. It might sound like the Terminator 2 storyline, but researchers at the University of Arizona are investigating this possibility. NASA is funding the work and the Jet Propulsion Laboratory is taking them seriously.

Ali Akoglu (assistant professor in computer engineering) and his team are developing a hybrid hardware/software system that may be used by computers to heal themselves. The researchers are using Field Programmable Gate Arrays (FPGAs) to create self-healing processes at the chip-level.

FPGAs use a combination of hardware and software. Because some hardware functions are carried out at the chip level, the software acts as FPGA “firmware”. Firmware is a common computing term for software commands embedded in a hardware device. Although the microprocessor processes firmware as it would any normal piece of software, these commands are specific to that particular chip; in this respect, firmware mimics hardware processes. This is where Akoglu’s research comes in.

The researchers are in the second phase of the project, called Scalable Self-Configurable Architecture for Reusable Space Systems (SCARS), and have set up five wireless networked units that could easily represent five cooperating rovers on Mars. When a hardware malfunction occurs, the networked “buddies” deal with the problem on two levels. First, the troubled unit attempts to repair the glitch at node level: by reconfiguring its firmware, the unit effectively reconfigures the circuit, bypassing the error. If that is unsuccessful, the unit’s buddies perform a back-up operation, reprogramming themselves to carry out the broken unit’s operations as well as their own. Unit-level intelligence is used in the first case, but should this fail, network-level intelligence takes over. All the operations are performed automatically; there is no human intervention.
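
To make the two-level idea concrete, here is a heavily simplified sketch. The class and function names are my own invention (SCARS itself works at the FPGA firmware level, not in Python), and the node-level “repair” is faked, but the fall-back logic mirrors the description above:

```python
# A simplified illustration of two-level fault recovery: try to fix a fault locally
# first, then fall back on networked buddies. Not SCARS code; names are invented.

from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    tasks: list = field(default_factory=list)
    healthy: bool = True

    def reconfigure_firmware(self) -> bool:
        """Node-level recovery: stand-in for rerouting the circuit around the fault.
        Here it simply fails if the unit is marked unhealthy."""
        return self.healthy

def recover(failed: Unit, buddies: list) -> str:
    # Level 1: the troubled unit tries to repair itself by reconfiguring its firmware.
    if failed.reconfigure_firmware():
        return f"{failed.name}: repaired at node level"
    # Level 2: buddies absorb the failed unit's workload on top of their own.
    for i, task in enumerate(failed.tasks):
        buddies[i % len(buddies)].tasks.append(task)
    failed.tasks.clear()
    return f"{failed.name}: workload redistributed to {[b.name for b in buddies]}"

rovers = [Unit(f"rover-{i}", tasks=[f"survey-{i}"]) for i in range(5)]
rovers[0].healthy = False                      # simulate a hardware fault
print(recover(rovers[0], rovers[1:]))
print({r.name: r.tasks for r in rovers})
```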

This is some captivating research with far-reaching benefits. If computers could heal themselves at long distance, millions of dollars would be saved and the longevity of space missions could be extended. The research would also be valuable to future manned missions: although the majority of computer issues can be fixed by astronauts, critical systems failures will occur, and a system such as SCARS could provide life-saving backup whilst the source of the problem is being found.

Source: UA News