Mars Mesas Stripped of Sand, Forming Dunes: Amazing Images from HiRISE

The mesa (left) and wind-blown sand features (right) (NASA)

There seems to be a never-ending flow of stunning images coming from the High Resolution Imaging Science Experiment (HiRISE) on board NASA’s Mars Reconnaissance Orbiter (MRO). In today’s high-resolution look at the Martian surface, large flat-topped hills (a.k.a. mesas) are being eroded by the Martian winds, which strip them of material and build sand dunes downwind. An incredible sight, it shows just how dynamic and powerful the Martian winds really are…

The downwind slope of one of the eroded mesas, with obvious sand build-up (NASA)

Imaged in the Hellespontus region of Mars, these fluid-like structures trailing across the surface are huge sand banks and sand dunes, built up after years of erosion of the mesas upstream. The Martian winds have gradually stripped these large geological structures, allowing sand to accumulate as dunes in areas of calm. The curious crescent/droplet-shaped dune morphology indicates dominant winds blowing from west to east (left to right). Sand carried off the mesas travels downstream; where the winds begin to slacken, possibly in large turbulent eddies, the suspended sand is dropped, allowing dunes to grow.

False color close-up of two sand dunes. Wind flow from left to right (NASA)

The shapes of the Martian dunes bear a striking resemblance to the barchan dunes found on Earth. The wind blows up the gentle slope of the dune, allowing sand to gradually build up. As the sand builds past a critical point, it collapses, forming a sharp slope on the downwind-facing side. Horn-like features are evident from above. In addition to the barchans, “seif”-like dunes are evident. Seifs are longitudinal stretches of sand running parallel to the wind direction. These are most obvious as they trail away from the mesas and stretch toward the clusters of barchan dunes.

See the entire region in a full-resolution projection.

The approximate size of the dunes (NASA)

These new images were captured on March 16th and resolve features down to approximately 1.5 meters. At this level of resolution, even the small ripples in the wind-blown sand can be seen. To give an idea of scale, I’ve included a close-up of one of the dunes. As annotated, the larger dunes are approximately 60 meters in length (from east to west) and around 40 meters in width.

Source: HiRISE

Russian Memorial for Space Dog Laika (Update)

Laika statue outside a research facility in Moscow (AP Photo/RIA-Novosti, Alexei Nikolsky)

On Friday, Russian officials unveiled a monument to Laika, the pioneering dog that led the way to manned spaceflight on November 3rd, 1957. Her little memorial is a model dog standing atop a rocket near a military research facility in Moscow. When she made her historic flight into space on board Sputnik II, very little was known about the effects of launch and zero gravity on an animal, and Laika wasn’t expected to survive. Being so small and hardy, she made it into orbit, but this was a one-way ticket; she had no idea there would be no coming home… Be warned, this isn’t a happy tale…

The dogs chosen for the Russian space program were usually stray mongrels, as it was believed they could survive and adapt in harsh conditions. Small dogs were also favoured, as they could fit into the capsule and were light for launch. Two-year-old Laika was apparently chosen from an animal shelter in Moscow for her good looks; after all, the first Russian in space would need to be photogenic. There was intense excitement about her selection for the space race, and she endeared herself to scientists and the public alike; she was described as “quiet and charming”.

Laika before launch in 1957 (NASA)

Unfortunately, Laika’s trip was far from humane. She had to wait, locked inside the capsule, for three days before launch whilst technical problems were fixed. Operators had to keep her warm by pumping hot air into her cockpit, as temperatures around the launch pad were freezing. Once the launch was successful, doctors were able to keep track of her heartbeat and blood pressure. The official story was that her heartbeat was fast at launch, but she calmed down and was able to eat a specially prepared meal in orbit.

There are mixed reports about what happened next, but the official Soviet version was that Laika lived in space for a week and was then euthanized remotely. However, after the Soviet Union collapsed, reports from mission scientists suggested that she only lived for a couple of days before being put down, or (most likely) that the cabin overheated soon after orbital insertion, killing her within hours.

Laika before launch in 1957 (AP Photo/NASA)

Interestingly, scientists did not announce that she was to die in orbit until after she was launched. Sputnik II was not equipped with a re-entry system and the craft burned up in the atmosphere after 2,570 orbits on April 14th, 1958.

It is easy for us to look back on Laika’s journey distastefully, but in the days of the Cold War there was huge pressure on scientists, in both the Soviet Union and the USA, to produce results. Sending dogs and other “guinea pigs” (I wonder, have any actual guinea pigs been sent into space?) into orbit was the most viable means of understanding the effects of space travel. Regardless, she paved the way for other orbiting dogs (safely returned this time) and by 1961 enough data had been gathered to send the first man into space: Yuri Gagarin.

Original source: Associated Press

Shortest Single-Photon Pulse Generated: Implications for Quantum Communications in Space

Equipment used by Oxford scientists to produce the pulses (Oxford Uni.)

Scientists at Oxford University have developed a method to generate the shortest-ever single-photon pulse by removing the interference of quantum entanglement. So how big are these tiny record-breakers? They are 20 microns long (or 0.00002 metres), with a duration of 65 femtoseconds (65 millionths of a billionth of a second). This experiment smashes the previous record for the shortest single-photon pulse; the Oxford photon is 50 times shorter. While this sounds pretty cool, what is all the fuss about? How can these tiny electromagnetic wave-particles be of any use? In two words: quantum computing. And in an additional three words: quantum satellite communications…
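
Those two numbers are really one number: a pulse’s length in space is just its duration multiplied by the speed of light. A quick back-of-the-envelope check (my arithmetic, not the Oxford paper’s):

```python
# Sanity check: spatial length of a light pulse = duration x speed of light.
c = 299_792_458          # speed of light (m/s)
duration = 65e-15        # 65 femtoseconds
length = c * duration
print(f"{length * 1e6:.1f} microns")  # -> 19.5 microns, matching the ~20 micron figure
```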

Quantum entanglement is a tough concept to put into words. In a nutshell: if a photon is absorbed by a certain type of material, two photons may be re-emitted. These two photons are of lower energy than the original photon, but they are emitted from the same source and are therefore entangled. This entangled pair is inextricably linked, regardless of the distance separating them. Should the quantum state of one be changed, the other will experience that change. In theory, no matter how far apart these photons are, the quantum change of one will be communicated to the other instantly. Einstein called this quantum phenomenon “spooky action at a distance” and didn’t believe it possible, but experiments have proven otherwise.
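
For the photon-splitting process described above, energy conservation pins down the daughter photons: the parent’s energy (hc/λ) must equal the sum of the daughters’, so 1/λ_pump = 1/λ_signal + 1/λ_idler. A small sketch with illustrative wavelengths (these are not the values used at Oxford):

```python
# Energy conservation when one photon splits into two:
# E_pump = E_signal + E_idler, i.e. 1/lam_pump = 1/lam_signal + 1/lam_idler.
# Wavelengths below are illustrative, not the Oxford group's values.
lam_pump = 400e-9        # parent photon wavelength (m)
lam_signal = 700e-9      # first daughter photon (m)
lam_idler = 1 / (1 / lam_pump - 1 / lam_signal)
print(f"second daughter: {lam_idler * 1e9:.0f} nm")  # -> 933 nm, lower energy than the parent
```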

The Oxford University experiment

So, in a recent publication, the Oxford group set out to remove the entangled state of the photons; this experiment isn’t about using the “spooky action”, it’s about getting rid of it. The aim is to remove the interference caused when one of the photon pair is detected: once one of the twins is detected, the quantum state of the other is altered, contaminating the signal. If this effect can be removed, very short-duration “pure” photons can be generated, heralding a new phase of quantum computing. If scientists have very definite, identical single photons at their disposal, highly accurate information can be carried with no interference from the quirky nature of quantum physics.

“Our technique minimises the effects of this entanglement, enabling us to prepare single photons that are extremely consistent and, to our knowledge, have the shortest duration of any photon ever generated. Not only is this a fascinating insight into fundamental physics but the precise timing and consistent attributes of these photons also makes them perfect for building photonic quantum logic gates and conducting experiments requiring large numbers of single photons.” – Peter Mosley, Co-Investigator, Oxford University.

The Oxford University blog reporting this news highlights how useful these regimented photons will be to quantum computing, but quantum communications in space could also be a major beneficiary. Imagine sending pulses of quantum-identical photons through space, to satellites at first and later through interplanetary space. Space scientists would have an extremely powerful resource: data could be sent through the vacuum, encrypted in a small number of photons, indecipherable to everything other than its destination…

Source: University of Oxford

What Happens When Three Black Holes Collide?

A computer simulation of two black holes colliding; what happens if three collide? (credit: EU Training Network)

The consequences of two black holes colliding may be huge; the energy produced by such a collision could even be detected by observatories here on Earth. Ripples in space-time will wash over the Universe as gravitational waves and are predicted to be detectable as they pass through the Solar System. Taking this idea one step further, what would happen if three black holes collide? Sound like science fiction? Well it’s not, and there is observational evidence that three black holes can cluster together, possibly colliding after some highly complex orbits that can only be calculated by the most powerful computers available to researchers…

(Caltech/EPFL)

Back in January 2007, a quasar triplet was observed over 10 billion light years away. Quasars are powered by supermassive black holes feeding at the cores of active galaxies. Using the powerful W. M. Keck Observatory, researchers from Caltech were able to peer back in time (10 billion years) to see a period in the Universe’s life when active galaxies and black hole mergers would have been fairly common events (compared to the calmer Universe of today). They observed three tightly packed quasars, an unprecedented discovery.

Now, scientists Manuela Campanelli, Carlos Lousto and Yosef Zlochower, all working at Rochester Institute of Technology’s Center for Computational Relativity and Gravitation, have simulated the highly complex mechanisms behind three interacting and merging supermassive black holes, much like the situation observed by Keck in 2007. The same group have worked on calculating the collision of two black holes before, and have written a code powerful enough to simulate the collision of up to 22 black holes. However, 22 black holes probably wouldn’t collide naturally; this simply demonstrates the stability of the code. “Twenty-two is not going to happen in reality, but three or four can happen,” says Yosef Zlochower, an assistant professor. “We realized that the code itself really didn’t care how many black holes there were. As long as we could specify where they were located – and had enough computer power – we could track them.”
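
Real black hole mergers demand full numerical relativity, which is far beyond anything that fits in a blog post, but Zlochower’s point, that the code doesn’t care how many bodies it tracks, is true of any N-body integrator. Here is a toy Newtonian sketch (gravity only, no relativity, arbitrary masses and starting positions) where the same loop handles 2, 3 or 22 bodies unchanged:

```python
import random

# Toy Newtonian N-body integrator. This is NOT numerical relativity --
# just an illustration that one integration loop handles any number of bodies.
G, dt = 1.0, 0.001

def step(masses, pos, vel):
    n = len(masses)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy + 1e-6) ** 1.5  # softened to avoid blow-ups
                acc[i][0] += G * masses[j] * dx / r3
                acc[i][1] += G * masses[j] * dy / r3
    for i in range(n):
        vel[i][0] += acc[i][0] * dt
        vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt

# The loop really doesn't care whether N is 2, 3 or 22:
for n_bodies in (2, 3, 22):
    masses = [1.0] * n_bodies
    pos = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(n_bodies)]
    vel = [[0.0, 0.0] for _ in range(n_bodies)]
    for _ in range(1000):
        step(masses, pos, vel)
    print(f"tracked {n_bodies} bodies for 1000 steps")
```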

These simulations are of paramount importance to gravitational wave detectors such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). So far no firm evidence has come from these detectors, but more time is needed; LIGO requires several years of “exposure time” to collect enough data and remove observational “noise”. But what do gravitational wave astronomers look for? This is the very reason so many different cosmic scenarios are being simulated: so that events like two- or three-black-hole mergers can be identified from their gravitational wave signature.

Gravitational wave astronomers “need to know what to look for in the data they acquire, otherwise it will look like just noise. If you know what to look for you can confirm the existence of gravitational waves. That’s why they need all these theoretical predictions.” – Manuela Campanelli, director of RIT’s Center for Computational Relativity and Gravitation.
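
That “know what to look for” approach boils down to matched filtering: slide a predicted waveform (a template) along the noisy data stream and watch for a spike in the correlation. The toy version below buries a made-up chirp in noise and recovers it; real LIGO pipelines work on calibrated strain data with vast banks of relativity-derived templates, so treat this purely as an illustration of the principle:

```python
import numpy as np

# Toy matched filter: hide a known "chirp" template in noise, then find it
# by cross-correlation. Illustrative only -- nothing like LIGO's real pipeline.
rng = np.random.default_rng(42)
t = np.linspace(0, 1, 4096)
template = np.sin(2 * np.pi * (20 + 60 * t) * t) * np.exp(-4 * (1 - t))  # rising-frequency chirp

data = rng.normal(0.0, 2.0, 16384)                     # detector "noise"
inject_at = 9000
data[inject_at:inject_at + template.size] += template  # buried signal

corr = np.correlate(data, template, mode="valid")      # slide template along the data
print(f"injected at {inject_at}, filter peak at {int(np.argmax(np.abs(corr)))}")
```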

Source: RIT University News

Space Station Sacrifices Progress Module to Dump Trash into Pacific

Goodbye Progress 28 - the Russian supply vehicle begins its re-entry (credit: NASA TV)

After all the excitement about last week’s successful docking of the European ATV “Jules Verne”, it’s time to spare a thought for its Russian predecessor. The Progress 28 module was filled with rubbish and unneeded equipment, quietly severed from its docking bay and steered toward Earth. On Monday at 0850 GMT, the selfless module dropped through the atmosphere, burned and eventually reached the Pacific Ocean, sinking into the satellite graveyard 3000 km east of the New Zealand coast…

On February 5th, a Russian Soyuz rocket launched the Progress 28 cargo ship to the International Space Station (ISS) to ferry supplies to the astronauts in orbit. This mission started a very busy period for space traffic controllers. Soon after Progress 28 was sent on its way, Space Shuttle Atlantis blasted off to deliver the Columbus module for installation on the station. ESA’s Automated Transfer Vehicle (ATV) then sat patiently in an orbital holding pattern until the shuttle undocked and flew back to Earth. Finally, at the start of this month (April 3rd), the ATV carried out a flawless approach and docking procedure with the ISS.

Watching over all this action was the Progress 28 module, attached patiently to the Russian-built Pirs docking compartment. After astronauts had salvaged reusable parts from the Progress module and filled it full of trash, the time came on April 7th to say Spokojnoj Nochi (Russian for “Good Night”) to the ill-fated supply ship, making room for the two Russians and one South Korean arriving after yesterday’s Soyuz launch.

Dropping supply modules into the Pacific may sound unsavoury, but it remains the only viable option for disposing of rubbish and unwanted material while in space. Simply jettisoning it overboard cannot be done; there must be a controlled disposal, dumping the trash into a used module and blasting it onto a re-entry trajectory. Littering Earth orbit is a critical problem, so space agencies are doing the best they can to send potential debris back to Earth, where most of it burns up in the atmosphere. Anything left over falls into a predetermined “satellite graveyard” in the world’s largest ocean.

(NASA)

Some interesting objects have been dropped from the station into the atmosphere. To mention the most humorous, in 2006 the Russian crew on board the station stuffed an old spacesuit with rubbish and launched “Ivan Ivanovich” into orbit. Ivan lasted for 216 days and set a lifetime record for ISS space debris. The suit eventually succumbed to gravity and burned up in the atmosphere.

“The drop zone for spaceship fragments, which did not burn in dense layers of the atmosphere, was located away from navigation routes, about 3,000 kilometers east of the New Zealand capital city of Wellington.” – Russia’s Federal Space Agency spokesperson Valery Lyndin.

Don’t think the sparkling new ATV is being let off either; in six months this hi-tech vehicle will be stuffed with garbage and thrown to a fiery death above the Pacific. Sad really…

Source: Space.com, New Scientist

Intel to Protect Microchips from Cosmic Rays

A simulation of the impact a cosmic ray has on entering the atmosphere (credit: AIRES package/Chicago University)

As computers become more advanced, the microprocessors inside them shrink in size and use less electrical current. These new, energy-efficient chips can be crammed closer together, increasing the number of calculations that can be done per second and therefore making the computer more powerful. But even the mighty supercomputer has its Achilles heel: an increased sensitivity to interference from charged particles originating far beyond your office. These highly energetic particles come from space and may cause critical hardware to miscalculate, possibly putting lives at risk.

Foreseeing this problem, microchip manufacturer Intel has begun devising ways to detect when a shower of charged particles may hit its chips so that, when one does, calculations can be re-run to iron out any errors…

Cosmic rays originate from our Sun, from supernovae and from other, unknown cosmic sources. Typically they are very energetic protons that zip through space at close to the speed of light. Some are so powerful that it has been postulated they may create micro black holes on impact with the Earth’s upper atmosphere. Naturally, these energetic particles can cause some damage. In fact, they may be a huge barrier to travelling beyond the safety of Earth’s magnetic field (the magnetosphere deflects most cosmic radiation, so even astronauts in Earth orbit are well shielded): the health of astronauts would be severely damaged during prolonged interplanetary flight.

But what about on Earth, where we are protected from the full force of cosmic rays? Although only a small portion of our annual radiation dose comes from cosmic rays (roughly 13%), they can have extensive effects over large volumes of the atmosphere. As cosmic rays collide with atmospheric molecules, a cascade of secondary particles is produced, known as an “air shower”. The billions of particles within the air shower from a single impact are often charged themselves (though of lesser energy than the parent cosmic ray), and the physics behind the air shower is beginning to grow in importance, especially in the realms of computing.

It seems computer microprocessor manufacturer Intel has been pondering the same question. The company has just released a patent detailing its plans for what to do should a cosmic ray penetrate the atmosphere and hit one of its delicate microchips. The problem will come when computing becomes so advanced that the tiny chips may “misfire” when a cosmic ray impact event occurs. Should the unlucky chip be hit by a cosmic ray, a spike of electrical current may be exerted across the circuitry, causing a miscalculation.

This may sound pretty benign; after all, what’s one miscalculation in billions? Intel’s senior scientist Eric Hannah explains:

“All our logic is based on charge, so it gets interference. […] You could be going down the autobahn [German freeway] at 200 miles an hour and suddenly discover your anti-lock braking system doesn’t work because it had a cosmic ray event.” – Eric Hannah.

After all, computers are getting smaller and cheaper, and they are being used everywhere, including in critical systems like the braking system described by Hannah above. And as chips shrink, many more of them can fit inside a single computer, increasing the risk. Where a basic single-processor computer may only experience one cosmic ray event in several years (producing an unnoticed calculation error), supercomputers with tens of thousands of processors may suffer 10-20 cosmic ray events per week. What’s more, in the near future even humble personal laptops may have the computing power of today’s supercomputers; 10-20 calculation errors per week would be unworkable, with too high a risk of data loss, software corruption or hardware failure.
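
Running those figures backwards (my arithmetic, not Intel’s) shows they hang together: 10-20 events a week spread across 10,000 processors works out at one hit per chip every decade or two, consistent with the “one event in several years” quoted for a single computer:

```python
# Back-of-the-envelope check on the soft-error rates quoted above.
processors = 10_000
for events_per_week in (10, 20):
    per_chip_per_week = events_per_week / processors
    years_between_hits = 1 / (per_chip_per_week * 52)
    print(f"{events_per_week}/week over {processors} chips -> "
          f"one hit per chip every {years_between_hits:.0f} years")
# -> roughly one hit every 10-19 years per chip, i.e. "several years"
```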

Orbital space stations, satellites and interplanetary spacecraft also come to mind. Space technology embraces advanced computing, as you get far more processing power in a smaller package, reducing weight, size and cost. What happens when a cosmic ray hits a satellite’s circuitry and causes a calculation error? A single miscalculation could seal the satellite’s fate. I dread to think what could happen to future manned missions to the Moon, Mars and beyond.

It is hoped that Intel’s plan may be the answer to this ominous problem. They want to manufacture a cosmic ray event tracker that would detect a cosmic ray impact, and then instruct the processor to recalculate the previous calculations from the point before the cosmic ray struck. This way the error can be purged from the system before it becomes a problem.
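
The article doesn’t give the patent’s internals, but the behaviour described, detect a strike and re-run the work from a known-good point, is the classic checkpoint-and-rollback pattern. A minimal sketch of that pattern (the “detector” here is just a random flag; none of this is Intel’s actual design):

```python
import random

# Minimal checkpoint-and-rollback sketch. A real version would live in
# hardware/microcode; the cosmic ray "detector" is simulated with a random flag.
random.seed(1)

def cosmic_ray_detected():
    return random.random() < 0.05    # pretend ~5% of steps get hit

state = 0
checkpoint = state                   # last known-good state
for step in range(1, 21):
    state += step                    # the "calculation": a running sum
    if cosmic_ray_detected():
        state = checkpoint           # discard potentially corrupted work...
        state += step                # ...and redo the step
    checkpoint = state               # commit the verified result
print(f"final state: {state}")       # 210, i.e. sum(1..20), despite the hits
```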

There will of course be many technical difficulties to overcome before a fast enough detector is developed; in fact, Eric Hannah admits it is hard to say when such a device may become a practical reality. Regardless, the problem has been identified and scientists are working on a solution; at least it’s a start…

Source: BBC

Meteorites Make a Big Splash on Mars: New Images of Secondary Craters by HiRISE

A few irregularly shaped craters from secondary, low-energy impacts on the Martian surface (NASA)

They look like pockmarks caused by shrapnel from a huge explosion. Actually they are surface features on Mars as seen by the High Resolution Imaging Science Experiment (HiRISE) on board the Mars Reconnaissance Orbiter (MRO). But what are they? They’re not potholes formed by geological processes, they’re not openings to ancient lava tubes, they are impact craters… but not like any impact crater you’ve seen before…

The whole range of secondary impact craters in Chryse Planitia (NASA)

Most meteorite impact craters are roughly circular. If they are fairly new, ejected debris will be obvious, emanating from the impact site. However, recent images by the HiRISE instrument appear to show a swarm of tiny impact craters, each looking as if it has been chiselled roughly out of the Martian regolith (pictured left).

The image covers roughly 0.5×1.5 kilometres (at 25 cm/pixel; features down to 85 cm can be resolved) of a large outflow channel in the Chryse Planitia region. The craters are actually secondary impact craters, caused by large chunks of Martian rock thrown into the air by an energetic meteorite impact elsewhere. To give an idea of size, the largest craters are about 40 meters across, a little smaller than an Olympic-sized swimming pool. It is not clear where the primary impact crater lies in relation to the debris craters in the full-resolution image.
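
Translating those numbers into pixels (my arithmetic from the figures quoted above): the strip is a few thousand pixels on a side, the largest craters span about 160 pixels, and the 85 cm limit is roughly three pixels, the usual rule of thumb for the smallest feature an imager can genuinely resolve:

```python
# Converting the quoted HiRISE figures into pixel counts.
pixel_scale = 0.25                 # metres per pixel
print(500 / pixel_scale)           # 0.5 km strip width  -> 2000 pixels
print(1500 / pixel_scale)          # 1.5 km strip length -> 6000 pixels
print(40 / pixel_scale)            # a 40 m crater       -> 160 pixels across
print(0.85 / pixel_scale)          # 85 cm feature       -> ~3.4 pixels
```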

There appears to be dark material inside these small craters, possibly from the debris digging into layered deposits of different minerals just below the surface. Ripples of sand and dust are also evident. As these small craters are quite shallow, they will quickly fill up and level out with wind-blown material, so the secondary craters we see must be fairly young on geological timescales.

Source: HiRISE mission site

Did a Cooked Meteorite Seed Life on Earth?

Earth, four billion years ago, was a lifeless, hot and violent place. Not exactly a world where you’d expect life to form. But this was the scene where the first life-forming amino acids appeared. And how did this happen? According to new research, a lump of rock floating through space may have been irradiated by neutron star emissions, chemically altering amino acids hitching a ride on it. This rock then impacted the Earth, injecting the altered chemicals into the desert wastes and possibly seeding the beginning of life on our planet… and this life was left-handed…

We’ve heard about this possibility before: life on Earth being seeded by some extraterrestrial body, like a comet, meteorite or asteroid. In fact, a prime reason for analysing comets and objects in the Oort Cloud (hovering around the outer reaches of our Solar System) is to look for pre-life chemicals and organic compounds such as amino acids. Indeed, the discovery of organic compounds during Cassini’s flyby of Saturn’s moon Enceladus last month is another piece in the puzzle toward understanding the extent of life-giving chemicals on worlds other than Earth.

Amino acids are the building blocks of the proteins found in all life forms on Earth, so it seems reasonable to search for their presence on bodies other than our planet, possibly giving us more information about how life formed and where it came from.

Amino acids come in two mirror-image forms, one left-handed and one right-handed, describing the orientation of the molecule. For life to be seeded, these molecules must be of predominantly one “chirality” (i.e. either left or right); it’s no good having mixed chirality.

At the 235th national meeting of the American Chemical Society, new research was presented describing how our amino acid signature may have come from outer space. Ronald Breslow, University Professor at Columbia University, points out that the vast majority of life on Earth is built on “left chirality”. And his reason for this? Polarized light emitted from neutron stars many billions of years ago irradiated rocky bodies carrying amino acid compounds on their surfaces, selectively destroying most of the “right-handed” acids. Although this theory may sound outlandish, it does give a possible reason for the prevalence of left-handed amino acids on Earth.

An irradiated meteorite would then have impacted the Earth, carrying amino acids with a dominance of left chirality, which dropped into the “primordial soup” from which the first forms of life evolved. All life as we know it would have inherited the same chirality as this soup of pre-biotic material.

“These meteorites were bringing in what I call the ‘seeds of chirality’ […] If you have a universe that was just the mirror image of the one we know about, then in fact, presumably it would have right-handed amino acids. That’s why I’m only half kidding when I say there is a guy on the other side of the universe with his heart on the right hand side.” – Ronald Breslow.

Breslow and his team simulated the events after such a meteorite hit the surface. As the left-dominated amino acids from space mixed with existing amino acids (of mixed chirality) on Earth, desert-like temperatures and a dash of water amplified the left-handed forms, giving them dominance and thus sparking the basic building blocks of life. He argues that these acids were most likely brought to Earth by meteorite, rather than chemically altered by radiation in situ: “…the evidence that these materials are being formed out there and brought to us on meteorites is overwhelming,” said Breslow.
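
The article doesn’t describe Breslow’s amplification chemistry in detail, but the general behaviour, a tiny initial excess of one handedness snowballing into total dominance, is captured by the classic Frank (1953) toy model, in which each form catalyses its own production while the two forms destroy each other on contact. This is a generic sketch of that idea, not Breslow’s mechanism; all rates are arbitrary:

```python
# Frank-style toy model of chiral amplification (not Breslow's chemistry).
# Each handedness replicates autocatalytically (rate k); the two forms
# annihilate on contact (rate q). Totals are renormalised each step to
# keep the "soup" a fixed size.
k, q, dt = 1.0, 1.0, 0.01
L, R = 0.505, 0.495              # start with just a 1% left-handed excess
for _ in range(3000):
    dL = (k * L - q * L * R) * dt
    dR = (k * R - q * L * R) * dt
    L, R = L + dL, R + dR
    total = L + R
    L, R = L / total, R / total
print(f"L={L:.3f}, R={R:.3f}, excess={(L - R) / (L + R):.1%}")
# -> the left-handed form ends up utterly dominant
```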

Source: Physorg.com

Supernova Precursor Discovered in Spiral Galaxy NGC 2397

Looking for a single star in a galaxy is a bit like trying to find a needle in a haystack. Although hard to do, astronomers using images from the Hubble Space Telescope (HST) are doing just that: trying to find stars before they explode as supernovae. In 2006, supernova SN 2006bc was spotted in the spiral galaxy NGC 2397, so astronomers got to work sifting through previous images taken by the HST. They found that star, captured in the rising stage of brightness as it exploded. Usually we don’t get to see this stage of a supernova, as we can’t predict which star is going to blow. But by retracing years of HST observational data, scientists are able to piece together the cosmic forensic evidence and see the star before it died…

SN 2006bc was seen back in 2006 in the spiral galaxy NGC 2397, located nearly 60 million light years from the Milky Way. There was no warning or any indication that a star was going to blow in that particular galaxy (after all, there are a lot of them out there), but Hubble’s Advanced Camera for Surveys (ACS) captured the galaxy after it happened, so astronomers watched the afterglow of the event. While a lot of good science can be done by analysing the remnants of a supernova, wouldn’t it be great to see a star before it explodes? Perhaps then we could analyse the emissions from an unstable star before it dies…

Predicting cosmic events is no new thing, and much effort is being put into various forecasting techniques. A few examples include:

  • Solar radiation: The main focus for solar physicists is to predict “space weather” to help protect us against the dangerous onslaught of high energy particles (particularly solar flares).
  • Detecting supernova neutrinos: The SuperNova Early Warning System (SNEWS) is already in place to detect the neutrinos blasted from a star’s core at the moment of collapse, the prelude to a supernova.
  • Gamma ray bursts (GRBs): The Polish “Pi of the Sky” GRB detector is an array of cameras looking out for optical flashes (or transients) in the night sky above the Chilean mountains. Working in tandem with NASA’s Swift gamma-ray observatory in orbit, a burst can be detected and other observatories immediately signalled to watch the event.

The above examples usually detect the sudden event of a solar flare, GRB or surge of neutrinos right at the point of initiation. Fortunately for solar physicists, we have a vast amount of high-spatial- and high-temporal-resolution data about our closest star. Should a flare be launched, we can “rewind the tape”, see the location of flare initiation and work out the conditions before the flare was launched. From this, we are better informed and can possibly predict where the next flare will be launched from. Supernova astronomers aren’t so lucky. The cosmos is a big place, after all; only a tiny proportion of the night sky has been observed in any great detail, and the chances that the same region has been imaged more than once at high resolution are slim.

Although the chances are slim, researchers from Queen’s University Belfast in Northern Ireland, led by Professor Stephen J. Smartt, used Hubble Space Telescope images to “rewind the tape” to before supernova SN 2006bc occurred. By confining their search for “pre-supernova” stars to local galaxies, there was a better chance of studying galaxies that had been imaged at high resolution, and imaged more than once, in the past. SN 2006bc turned out to be the perfect candidate.

The group has done this before. Of the six precursor stars discovered to date, Smartt’s team found five. From their analysis, it is hoped that the characteristics of a star before it dies can be worked out, as the conditions required for a supernova to occur are poorly understood.

After ten years of surveying, the group presented their discoveries of supernova precursor stars at this year’s National Astronomy Meeting in Belfast last week. It appears that stars with masses as low as seven times the mass of our Sun can explode as supernovae. The team go on to hypothesise that the most massive stars may not explode as supernovae at all, instead dying through collapse and forming black holes. The emission from such an event may be too faint to observe, and the most energetic supernovae may be restricted to the smaller stars.

Six supernova precursor stars is not a large enough sample to draw any big conclusions quite yet, but it is a big step in the right direction toward better understanding the mechanisms at work in a star just about to explode…

Source: ESA

Star Formation Extinguished by Quasars

The jet blasting from the active galactic nucleus of the galaxy M87

According to new research, a galaxy with a quasar in the middle is not a good place to grow up. As active galactic nuclei (AGN) evolve, they pass through a “quasar phase”, where the accretion disk surrounding the central black hole blasts intense radiation into space. The quasar far outshines the entire host galaxy. After the quasar phase, when the party is over, it is as if there is no energy left and star formation stops.

AGN are the compact, active and bright central cores of active galaxies. The intense brightness of these cores is produced by a gravitationally driven accretion disk of hot matter spinning and falling into a supermassive black hole at the centre. During the lifetime of an AGN, the black hole/accretion disk combo will undergo a “quasar phase”, where intense radiation is blasted from the superheated gases surrounding the black hole. Typically, quasars are formed in young galaxies.

Multicolour SDSS optical images of NGC 5806 and NGC 5750, nearby spiral galaxies with active nuclei similar to those being studied by Westoby and his collaborators. Image credit: The Sloan Digital Sky Survey

Although the quasar phase is highly energetic and tied to young galaxy formation, according to new results from the Sloan Digital Sky Survey it also marks the end of any further star birth in the galaxy. These findings will be presented today (Friday 4th April) at the RAS National Astronomy Meeting in Belfast, Northern Ireland, by Paul Westoby, who has just completed a study of 360,000 galaxies in the local Universe. He carried out this research with Carole Mundell and Ivan Baldry from the Astrophysics Research Institute at Liverpool John Moores University, UK. The study was proposed to understand the relationship between accreting black holes, the birth of stars in galactic cores and the evolution of galaxies as a whole. The results are astonishingly detailed.

With so many galaxies analysed, quite a detailed picture emerges. The primary result shows that while a young galactic core is dominated by a highly energetic quasar, star formation stops. After this phase in a galaxy’s life, further star formation is not possible; the remaining stars are left to evolve by themselves.

An artist’s impression of a quasar (credit: NASA)

It is believed that all AGN go through the quasar phase early in their galactic lives. It is also thought that most massive galaxies have a supermassive black hole hiding passively inside their galactic cores, having already gone through the quasar phase. Westoby notes that some dormant supermassive black holes can be “reignited” into a secondary quasar phase, but the mechanisms behind this remain sketchy.

“The starlight from the host galaxy can tell us much about how the galaxy has evolved […] Galaxies can be grouped into two simple colour families: the blue sequence, which are young, hotbeds of star-formation, and the red sequence, which are massive, cool and passively evolving.” – Paul Westoby.

The survey finds a sudden cut-off point for star formation, occurring right at the end of the quasar phase. The AGN then relaxes into a quieter state; with no new stars forming, the galaxy’s existing stars gradually evolve into the “red sequence”.

Other findings include the indication that, regardless of the size of the galaxy, it is the shape of the galactic “bulge” that matters. Without a large classical bulge in the centre, the supermassive black holes that drive AGN are not possible; therefore, only galaxies with a bulge host an AGN at the core. Another factor affecting supermassive black hole formation is the density of galaxies in a volume of space: should there be too many, supermassive black holes become scarce.

Source: The RAS National Astronomy Meeting 2008