What’s Up This Week – November 21 – November 27, 2005

M2. Image credit: Doug Williams – REU Program – NOAO/AURA/NSF
Monday, November 21 – Tonight let’s start with a wonderful globular cluster that gives the very best of all worlds – one that can be seen in even the smallest of binoculars and from both hemispheres! Your destination is about one-third the distance between Beta Aquarii and Epsilon Pegasi…

First recorded by Maraldi in 1746 and cataloged by Messier in 1760, the 6.0 magnitude M2 is one of the finest and brightest of Class II globular clusters. At 13 billion years old, this rich globular is one of the oldest in our galaxy, and its position in the halo puts it directly beneath the Milky Way’s southern pole. In excess of 100,000 faint stars form a well-concentrated sphere spanning 150 light-years – one that shows easily in any optical aid. While only larger scopes will begin to resolve this awesome cluster’s yellow and red giants, we’d do well to remember that our own Sun would shine at around 21st magnitude if it were as distant as this ball of stars!
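That last claim is a quick distance-modulus exercise: an object’s apparent magnitude follows m = M + 5 log10(d / 10 pc). Here is a minimal sketch in Python, assuming the Sun’s absolute visual magnitude of +4.83 and a distance to M2 of roughly 37,500 light-years (published estimates vary):

```python
import math

M_SUN = 4.83              # absolute visual magnitude of the Sun
LY_PER_PC = 3.2616        # light-years per parsec

distance_ly = 37_500      # assumed distance to M2
distance_pc = distance_ly / LY_PER_PC

# Distance modulus: m = M + 5 * log10(d / 10 pc)
apparent_mag = M_SUN + 5 * math.log10(distance_pc / 10)
print(f"The Sun at M2's distance: magnitude {apparent_mag:.1f}")  # ~20.1
```

The result lands near magnitude 20 – within a magnitude of the figure above, and the difference sits well inside the spread of published distance estimates.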

Tuesday, November 22 – With the Moon comfortably out of the way this evening, let’s head for the constellation of Cetus and a dual study as we conquer NGC 246 and NGC 255.

Located about four finger widths north of bright Beta Ceti – and triangulating south with Phi 1, 2 and 3 – is our first mark. NGC 246 is an 8.0 magnitude planetary nebula which will appear as a slightly out-of-focus star in binoculars, but a whole lot like a Messier object to a small scope. Here is the southern sky’s version of the “Ring Nebula.” While this one is actually a bit brighter than its M57 counterpart, larger scopes might find its 12.0 magnitude central star just a little bit easier to resolve.

If you are using large aperture, head just a breath northwest and see if you can capture small and faint galaxy NGC 255. This 12.8 magnitude spiral will show a very round signature with a gradual brightening towards the nucleus and a hint of outer spiral arms.

Wednesday, November 23 – Tonight in 1885, the very first photograph of a meteor shower was taken. Also, weather satellite Tiros II was launched on this day in 1960. Carried to orbit by a three-stage Delta rocket, the “Television Infrared Observation Satellite” was about the size of a barrel, testing experimental television techniques and infrared equipment. Operating for 376 days, Tiros II sent back thousands of pictures of Earth’s cloud cover and was successful in its experiments to control the orientation of the satellite’s spin and its infrared sensors. Oddly enough, on this day in 1977 a similar mission – Meteosat 1 – became the first satellite put into orbit by the European Space Agency. Where is all this leading? Why not try observing satellites on your own! Thanks to wonderful on-line tools like Heavens-Above, you’ll be “in the know” whenever a bright satellite makes a pass over your specific area. It’s fun!

Thursday, November 24 – Tonight let’s return to Cassiopeia and start by exploring the central bright star of the “W” – Gamma. At approximately 550 light-years away, Gamma is very unusual. More than a garden-variety variable, this particular star has been known to go through some very radical changes in its temperature, spectrum, magnitude, color and even diameter! Gamma is also a visual double star, but the 11th magnitude companion is highly difficult to perceive so close (2.3″) to the primary.

Four degrees southeast of Gamma is our marker for this starhop, Phi Cassiopeiae. By aiming binoculars or telescopes at this star, it is very easy to locate an interesting open cluster – NGC 457 – in the same field of view. This bright and splendid galactic cluster has received a variety of names over the years because of its uncanny resemblance to a figure.

Both Phi and HD 7902 may not be true members of the cluster. If magnitude 5 Phi were actually part of this grouping, it would have to lie at a distance of approximately 9300 light-years, making it one of the most luminous stars known! The fainter members of NGC 457 comprise a relatively “young” star cluster spanning about 30 light-years. Most of the stars are only about 10 million years old, yet there is an 8.6 magnitude red supergiant near the center.

Friday, November 25 – If you live in the northeastern United States or Canada, it would be worth getting up early this morning as the Moon occults bright Sigma Leonis. Be sure to check IOTA for times and locations near you!

Tonight we’re heading south and our goal will be about two finger widths north-northwest of Alpha Phoenicis.

At magnitude 7.8, this huge member of the Sculptor Galaxy Group, known as NGC 55, will be a treat in both binoculars and telescopes. Somewhat similar in structure to the Large Magellanic Cloud, NGC 55 gives southern hemisphere observers an opportunity to see a galaxy much like M82 in appearance – but on a far grander scale! Larger scopes will show mottling in its structure, resolution of individual stars and nebulous areas, as well as a very prominent central dark dust cloud. Like its northern counterpart, the Ursa Major group, the Sculptor Group lies at around the same distance from our own Local Group.

Saturday, November 26 – This morning it is our Russian comrades’ turn as the Moon occults Beta Virginis. As always, times and locations can be found on the IOTA website! Today also marks the launch of the first French satellite – Asterix 1.

It’s time to head north again as we turn our eyes towards 1000 light-year distant Delta Cephei, one of the most famous of all variables. It is an example of a “pulsing variable” – one whose magnitude changes are attributed not to an eclipsing companion but to the expansion and contraction of the star itself. For unaided observers, discover what John Goodricke did in 1784… You can follow its nearly one-magnitude variability by comparing it to nearby Epsilon and Zeta. It rises to maximum in about a day and a half, yet the fall takes about four days.

Let’s travel about a finger width southeast of Delta Cephei to young open cluster NGC 7380. This large gathering of stars has a combined magnitude of 7.2. Like many young clusters, it is embroiled in faint nebulosity. Surrounded by a dispersed group of brighter stars, the cluster itself can be seen in binoculars and may resolve into around three dozen faint members at mid-aperture.

Before you leave, return to Delta Cephei and take a closer look. It is also a well-known double star, measured by F.G.W. Struve in 1835. Its 6.3 magnitude companion has shown no change in separation or position angle in the 170 years between Struve’s measurement and the view we have now. Chances are, the two are not a physical pair. S.W. Burnham discovered a third, 13th magnitude companion in 1878. Enjoy the color contrast between its members!

Sunday, November 27 – Tonight let’s use binoculars or small scopes to go northern and southern “cluster hunting.”

The first destination is NGC 7654, but you’ll find it more easily by its common name of M52. To find it with binoculars, draw a mental line between Alpha and Beta Cassiopeiae and extend it about the same distance along the same trajectory. This mixed magnitude cluster is bright and easy.

The next, NGC 129, is located almost directly between Gamma and Beta. This is also a large, bright cluster that resolves in a small scope but shows at least a dozen of its 35 members to binoculars. Near the cluster’s center and north of a pair of matched magnitude stars is the Cepheid variable DL Cassiopeiae – which changes by about a magnitude over a period of roughly a week.

Now head for Epsilon Cassiopeiae – the northeasternmost star of the “W” – and hop about three finger widths to the east-southeast. Here you will find 3300 light-year distant NGC 1027. An attractive “starry swatch” in binoculars, it will give small scopes a wonderful time resolving its 40 or more faint members.
If you live south, have a look at open cluster IC 2602. This very bright, mixed magnitude cluster includes 3.0 magnitude Theta Carinae. Seen easily unaided, this awesome open cluster contains around 30 stars. Be sure to look on its southwest edge with binoculars or scopes for a smaller feature overshadowed by the grander companions. Tiny by comparison, Melotte 101 will appear like a misty patch of faint stars. Enjoy!

Until next week… May all your journeys be at light speed! ~Tammy Plotner

Why is Moondust So Clingy?

A single grain of moondust hangs suspended in Abbas’s vacuum chamber. Image credit: NASA
Each morning, Mian Abbas enters his laboratory and sits down to examine–a single mote of dust. Zen-like, he studies the same speck suspended inside a basketball-sized vacuum chamber for as long as 10 to 12 days.

The microscopic object of his rapt attention is not just any old dust particle. It’s moondust. One by one, Abbas is measuring properties of individual dust grains returned by Apollo 17 astronauts in 1972 and the Russian Luna-24 sample-return spacecraft that landed on the Moon in 1976.

“Experiments on single grains are helping us understand some of the strange and complex properties of moondust,” says Abbas. This knowledge is important. According to NASA’s Vision for Space Exploration, astronauts will be back on the moon by 2018–and they’ll have to deal with lots of moondust.

The dozen Apollo astronauts who walked on the moon between 1969 and 1972 were all surprised by how “sticky” moondust was. Dust got on everything, fouling tools and spacesuits. Equipment blackened by dust absorbed sunlight and tended to overheat. It was a real problem.

Many researchers believe that moondust has a severe case of static cling: it’s electrically charged. In the lunar daytime, intense ultraviolet (UV) light from the sun knocks electrons out of the powdery grit. Dust grains on the moon’s daylit surface thus become positively charged.
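To get a feel for the amount of charge involved, a grain can be treated as a tiny isolated conducting sphere, whose charge at a given surface potential is Q = 4πε₀rV. A minimal sketch, where both the grain size and the +5 V surface potential are assumed, illustrative values:

```python
import math

EPS0 = 8.854e-12       # vacuum permittivity, F/m
E_CHARGE = 1.602e-19   # elementary charge, C

radius = 0.5e-6        # grain radius in metres (a grain ~1 micrometre across)
potential = 5.0        # assumed photoelectric surface potential, volts

# Charge on an isolated conducting sphere: Q = 4*pi*eps0*r*V
q = 4 * math.pi * EPS0 * radius * potential
print(f"~{q / E_CHARGE:.0f} elementary charges")  # roughly 1,700
```

A few thousand missing electrons may sound small, but on a microscopic grain it is plenty for electric forces to matter.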

Eventually, the repulsive charges become so strong that grains are launched off the surface “like cannonballs,” says Abbas, arcing kilometers above the moon until gravity makes them fall back again to the ground. The moon may have a virtual atmosphere of this flying dust, sticking to astronauts from above and below.

Or so the theory goes.

But do grains of lunar dust truly become positively charged when illuminated by ultraviolet light? If so, which grains are most affected–big grains or little grains? And what does moondust do when it’s charged?

These are questions Abbas is investigating in his “Dusty Plasma Laboratory” at the National Space Science and Technology Center in Huntsville, Alabama. Along with colleagues Paul Craven and doctoral student Dragana Tankosic, Abbas injects a single grain of lunar dust into a chamber and “catches” it using electric force fields. (The injector gives the grain a slight charge, allowing it to be handled by electric fields.) With the grain held suspended literally in mid-air, they “pump the chamber down to 10⁻⁵ torr to simulate lunar vacuum.”

Next comes the mesmerizing part: Abbas shines a UV laser on the grain. As expected, the dust gets “charged up” and it starts to move. By adjusting the chamber’s electric fields with painstaking care, Abbas can keep the grain centered; he can measure its changing charge and explore its fascinating characteristics.

Like the Apollo astronauts, Abbas has already discovered some surprises–even though his experiment is not yet half done.

“We’ve found two things,” says Abbas. “First, ultraviolet light charges moondust 10 times more than theory predicts. Second, bigger grains (1 to 2 micrometers across) charge up more than smaller grains (0.5 micrometer), just the opposite of what theory predicts.”

Clearly, there’s much to learn. For instance, what happens at night, when the sun sets and the UV light goes away?

That’s the second half of Abbas’s experiment, which he hopes to run in early 2006. Instead of shining a UV laser onto an individual lunar particle, he plans to bombard dust with a beam of electrons from an electron gun. Why electrons? Theory predicts that lunar dust may acquire negative charge at night, because it is bombarded by free electrons in the solar wind–that is, particles streaming from the sun that curve around behind the moon and hit the night-dark soil.

When Apollo astronauts visited the Moon 30+ years ago, they landed in daylight and departed before sunset. They never stayed the night, so what happened to moondust after dark didn’t matter. This will change: The next generation of explorers will remain much longer than Apollo astronauts did, eventually setting up a permanent outpost. They’ll need to know: how does moondust behave around the clock?

Stay tuned for answers from the Dusty Plasma Lab.

Original Source: NASA News Release

Radiation Resistant Computers

EAFTC computers in a space-ready flight chassis. Image credit: NASA/Honeywell.
Unfortunately, the radiation that pervades space can trigger computer glitches. When high-speed particles, such as cosmic rays, collide with the microscopic circuitry of computer chips, they can cause chips to make errors. If those errors send the spacecraft flying off in the wrong direction or disrupt the life-support system, it could be bad news.

To ensure safety, most space missions use radiation hardened computer chips. “Rad-hard” chips are unlike ordinary chips in many ways. For example, they contain extra transistors that take more energy to switch on and off. Cosmic rays can’t trigger them so easily. Rad-hard chips continue to do accurate calculations when ordinary chips might “glitch.”

NASA relies almost exclusively on these extra-durable chips to make computers space-worthy. But these custom-made chips have some downsides: They’re expensive, power hungry, and slow — as much as 10 times slower than an equivalent CPU in a modern consumer desktop PC.

With NASA sending people back to the moon and on to Mars–see the Vision for Space Exploration–mission planners would love to give their spacecraft more computing horsepower.

Having more computing power onboard would help spacecraft conserve one of their most limited resources: bandwidth. The bandwidth available for beaming data back to Earth is often a bottleneck, with transmission speeds even slower than old dial-up modems. If the reams of raw data gathered by the spacecraft’s sensors could be “crunched” onboard, scientists could beam back just the results, which would take much less bandwidth.

On the surface of the moon or Mars, explorers could use fast computers to analyze their data right after collecting it, quickly identifying areas of high scientific interest and perhaps gathering more data before a fleeting opportunity passes. Rovers would benefit, too, from the extra intelligence of modern CPUs.

Using the same inexpensive, powerful Pentium and PowerPC chips found in consumer PCs would help tremendously, but to do so, the problem of radiation-induced errors must be solved.

This is where a NASA project called Environmentally Adaptive Fault-Tolerant Computing (EAFTC) comes in. Researchers working on the project are experimenting with ways to use consumer CPUs in space missions. They’re particularly interested in “single event upsets,” the most common kind of glitches caused by single particles of radiation barreling into chips.

Team member Raphael Some of JPL explains: “One way to use faster, consumer CPUs in space is simply to have three times as many CPUs as you need: The three CPUs perform the same calculation and vote on the result. If one of the CPUs makes a radiation-induced error, the other two will still agree, thus winning the vote and giving the correct result.”
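In code, the voting scheme Some describes is just a majority function over redundant results. A minimal sketch – the setup below is illustrative and not EAFTC’s actual software:

```python
from collections import Counter

def vote(results):
    """Return the majority value among redundant CPU results.

    With three copies, a single radiation-induced error is outvoted
    by the two copies that still agree.
    """
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one copy failed")
    return value

# Three CPUs run the same calculation; one suffers a bit-flip.
print(vote([42, 42, 46]))  # -> 42, the corrupted 46 is outvoted
```

Note that with only two copies a disagreement can be detected but not outvoted, which is why the full three-way vote is reserved for the calculations that matter most.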

This works, but often it’s overkill, wasting precious electricity and computing power to triple-check calculations that aren’t critical.

“To do this smarter and more efficiently, we’re developing software that weighs the importance of a calculation,” continues Some. “If it’s very important, like navigation, all three CPUs must vote. If it’s less important, like measuring the chemical makeup of a rock, only one or two CPUs might be involved.”

This is just one of dozens of error-correction techniques that EAFTC pulls together into a single package. The result is much better efficiency: Without the EAFTC software, a computer based on consumer CPUs needs 100-200% redundancy to protect against radiation-caused errors. (100% redundancy means 2 CPUs; 200% means 3 CPUs.) With EAFTC, only 15-20% redundancy is needed for the same degree of protection. All of that saved CPU time can be used productively instead.

“EAFTC is not going to replace rad-hard CPUs,” cautions Some. “Some tasks, such as life support, are so important we’ll always want radiation hardened chips to run them.” But, in due course, EAFTC algorithms might take some of the data-processing load off those chips, making vastly greater computer power available to future missions.

EAFTC’s first test will be onboard a satellite called Space Technology 8 (ST-8). Part of NASA’s New Millennium Program, ST-8 will flight-test new, experimental space technologies such as EAFTC, making it possible to use them in future missions with greater confidence.
The satellite, scheduled for a 2009 launch, will skim the Van Allen radiation belts during each of its elliptical orbits, testing EAFTC in this high-radiation environment similar to deep space.

If all goes well, space probes venturing across the solar system may soon be using the exact same chips found in your desktop PC — just without the glitches.

Original Source: NASA News Release

Early Earth Wasn’t So Hellish

The Earth. Image credit: NASA.
New ANU research is set to radically overturn the conventional wisdom that early Earth was a hellish planet barren of continents.

An international research team led by Professor Mark Harrison of the Research School of Earth Sciences analysed unique 4 to 4.35 billion-year-old minerals from outback Australia and found evidence that a fringe theory detailing the development of continents during the first 500 million years of Earth history – the Hadean (“hellish”) Eon – is likely to be correct.

The research, published in the latest edition of Science, follows on from results by Professor Harrison and his colleagues published earlier this year that confirmed that our planet was also likely to have had oceans during most of the Hadean.

“A new picture of early Earth is emerging,” Professor Harrison said. “We have evidence that the Earth’s early surface supported water – the key ingredient in making our planet habitable. We have evidence that this water interacted with continent-forming magmas throughout the Hadean.

“And now we have evidence that massive amounts of continental crust were produced almost immediately upon Earth formation. The Hadean Earth may have looked much like it does today rather than our imagined view of a desiccated world devoid of continents.”

Professor Harrison and his team gathered their evidence from zircons, the oldest known minerals on Earth. These ancient grains, typically about the width of a human hair, are found only in the Murchison region of Western Australia. The team analysed the isotopic properties of the element hafnium in about 100 tiny zircons that are as old as 4.35 billion years.

Conventionally, it has been believed that the Earth’s continents developed slowly over a long period of time beginning about 4 billion years ago – or 500 million years after the planet formed.

However, hafnium isotope variations produced by the radioactive decay of an isotope of lutetium indicate many of these ancient zircons formed in a continental setting within about 100 million years of Earth formation.

“The evidence points to almost immediate development of continent followed by its rapid recycling back into the mantle via a process akin to modern plate tectonics,” according to Professor Harrison.

The isotopic imprint left on the mantle by early melting shows up again in younger zircons – providing evidence that they have tapped the same source. This suggests that the amount of mantle processed to make continent must have been enormous.

“The results are consistent with the Earth hosting a similar mass of continental crust as the present day at 4.5-4.4 billion years.

“This is a radical departure from conventional wisdom regarding the Hadean Earth,” said Professor Harrison.

“But these ancient zircons represent the only geological record we have for that period of Earth history and thus the stories they tell take precedence over myths that arose in the absence of observational evidence.”

“The simplest explanation of all the evidence is that essentially from its formation, the planet fell into a dynamic regime that has persisted to the present day.”

Original Source: ANU News Release

More Einstein Rings Discovered

Einstein ring gravitational lens: SDSS J163028.15+452036.2. Image credit: Hubble.
As Albert Einstein developed his theory of general relativity nearly a century ago, he proposed that the gravitational field from massive objects could dramatically warp space and deflect light.

The optical illusion created by this effect is called gravitational lensing. It is nature’s equivalent of having a giant magnifying lens in space that distorts and amplifies the light of more distant objects. Einstein described gravitational lensing in a paper published in 1936. But he thought the effect was unobservable because the optical distortions produced by foreground stars warping space would be too small to ever be measurable by the largest telescopes of his time.

Now, almost a century later, astronomers have combined two powerful astronomical assets, the Sloan Digital Sky Survey (SDSS) and NASA’s Hubble Space Telescope, to identify 19 new “gravitationally lensed” galaxies, adding significantly to the approximately 100 gravitational lenses previously known. Among these 19, they have found eight new so-called “Einstein rings”, which are perhaps the most elegant manifestation of the lensing phenomenon. Only three such rings had previously been seen in visible light.

In gravitational lensing, light from distant galaxies can be deflected on its way to Earth by the gravitational field of any massive object that lies in the way. Because of this, we see the galaxy distorted into an arc or multiple separate images. When both galaxies are exactly lined up, the light forms a bull’s-eye pattern, called an Einstein ring, around the foreground galaxy.
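For a point-mass lens, the angular radius of that bull’s-eye – the Einstein radius – is set by the lens mass and the distances involved: θ_E = √(4GM/c² × D_LS / (D_L D_S)). A minimal sketch with assumed, illustrative numbers rather than SLACS measurements:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
LY = 9.461e15          # light-year, m

M = 1e11 * M_SUN       # assumed lens-galaxy mass
d_lens = 2e9 * LY      # observer-to-lens distance
d_src = 4e9 * LY       # observer-to-source distance
d_ls = d_src - d_lens  # lens-to-source (ignoring cosmological corrections)

theta_e = math.sqrt(4 * G * M / C**2 * d_ls / (d_lens * d_src))  # radians
print(f"Einstein radius ~ {math.degrees(theta_e) * 3600:.2f} arcseconds")  # ~0.8
```

An answer of order one arcsecond is typical for galaxy-scale lenses, which is why Hubble’s sharp imaging matters for confirming them.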

The newly discovered lenses come from an ongoing project called the Sloan Lens ACS Survey (SLACS). A team of astronomers, led by Adam Bolton of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., and Leon Koopmans of the Kapteyn Astronomical Institute in the Netherlands, selected the candidate lenses from among several hundred thousand optical spectra of elliptical galaxies in the Sloan Digital Sky Survey. They then used the sharp eyes of Hubble’s Advanced Camera for Surveys to make the confirmation.

“The massive scale of the SDSS, together with the imaging quality of the Hubble telescope, has opened up this unprecedented opportunity for the discovery of new gravitational lenses,” Bolton explained. “We’ve succeeded in identifying the one out of every 1,000 galaxies that show these signs of gravitational lensing of another galaxy.”

The SLACS team scanned the spectra of approximately 200,000 galaxies 2 to 4 billion light-years away. The team was looking for clear evidence of emission from galaxies twice as far from Earth and directly behind the closer galaxies. They then used Hubble’s Advanced Camera for Surveys to snap images of 28 of these candidate lensing galaxies. By studying the arcs and rings produced by 19 of these candidates, the astronomers can precisely measure the mass of the foreground galaxies.

Besides producing odd shapes, gravitational lensing gives astronomers the most direct probe of the distribution of dark matter in elliptical galaxies. Dark matter is an invisible and exotic form of matter that has not yet been directly observed. Astronomers infer its existence by measuring its gravitational influence. Dark matter is pervasive within galaxies and makes up most of the total mass of the universe. By searching for dark matter in galaxies, astronomers hope to gain insight into galaxy formation, which must have started around lumpy concentrations of dark matter in the early universe.

“Our results indicate that, on average, these ‘elliptical lensing galaxies’ have the same special mass-density structure as that observed in spiral galaxies,” Bolton continued. “This corresponds to an increase in the proportion of dark matter relative to stars as one moves away from the center of the lensing galaxy and into its fainter outskirts. And since these lensing galaxies are relatively bright, we can solidify this result with further ground-based spectroscopic observations of the stellar motions in the lenses.”

“Being able to study these and other gravitational lenses as far back in time as several billion years allows us to see directly whether the distribution of dark [invisible] and visible mass changes with cosmic time,” Dr. Koopmans added. “With this information, we can test the commonly held idea that galaxies form from collision and mergers of smaller galaxies.”

The Sloan Digital Sky Survey, from which the SLACS lens-candidate sample was selected, was begun in 1998 with a custom-built ground-based telescope to measure the colors and brightnesses of more than 100 million objects over a quarter of the sky and map the distances to a million galaxies and quasars. “This type of gravitational-lens survey was not an original goal of the SDSS, but was made possible by the excellent quality of the SDSS data,” said Scott Burles of the Massachusetts Institute of Technology in Cambridge, Mass., a SLACS team member and one of the creators of the SDSS.

“An additional bonus of the large size of the SDSS database is that we can design our search criteria so as to find the lenses that are most suitable for specific science goals,” said SLACS team member Tommaso Treu of the University of California, Santa Barbara. “Whereas until now we have selected the largest galaxies as our targets, in the next stages of the survey we are targeting smaller lens galaxies. There have been suggestions that the structure of galaxies changes with galaxy size. By identifying these rare objects ‘on demand,’ we will soon be able for the first time to test whether this is true.”

Added SLACS team member Leonidas Moustakas of the NASA Jet Propulsion Laboratory and the California Institute of Technology in Pasadena, Calif.: “These Einstein rings also give an unrivaled magnified view of the lensed galaxies, allowing us to study the stars and the formation histories of these distant galaxies.”

The SLACS Survey is continuing, and so far the team has used Hubble to study almost 50 of their candidate lensing galaxies. The eventual total is expected to be more than 100, with many more new lenses among them. The initial findings of the survey will appear in the February 2006 issue of the Astrophysical Journal and in two other papers that have been submitted to that journal.

Original Source: Hubblesite News Release

Mars Express Radar Data is Coming In

Artist’s impression of MARSIS deployment complete. Image credit: ESA.
The Mars Express radar, MARSIS, has now been deployed for more than four months. Here we report on the activities so far.

For the operational period up to now, Mars Express has been making its closest approaches to Mars predominantly in the daytime portion of its orbit. The MARSIS science team is therefore mainly collecting data about the upper layer of the Martian atmosphere, the “ionosphere” – the highly electrically conducting layer that is maintained by sunlight.

They are also continuing the laborious analysis of all data gathered during the first night-time observations last summer, especially in the search for and interpretation of possible signals from subsurface layers. This includes the search for a possible signature of underground water, in frozen or liquid state.

Radar science is a complex business – it is based on the detection of radio waves reflected by boundaries between different materials. By analysis of these “echoes”, it is possible to deduce information about the kind of material causing the reflection, such as estimates of its composition and physical state.

Different materials are characterised by their “dielectric constant” – that is, the specific way they interact with electromagnetic radiation, such as radio waves. When a radio wave crosses a boundary between layers of different materials, an echo is generated that carries a sort of “fingerprint” of the specific materials.

From the time it takes an echo to return to the radar instrument, the distance or depth of the layer of material producing the echo can be deduced.
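Concretely, the wave travels at c/√ε in the medium, and the measured delay covers the trip down and back. A minimal sketch – the delay and dielectric constant below are assumed, illustrative values, not MARSIS data:

```python
C = 2.998e8  # speed of light in vacuum, m/s

def layer_depth(extra_delay_s, dielectric_constant):
    """Depth of a subsurface reflector from its round-trip echo delay.

    Radio waves travel at c / sqrt(epsilon) in a medium, and the
    measured delay covers the path down and back, hence the half.
    """
    wave_speed = C / dielectric_constant ** 0.5
    return wave_speed * extra_delay_s / 2

# A 10-microsecond extra delay through dry basaltic material (epsilon ~ 4):
print(f"{layer_depth(10e-6, 4.0):.0f} m deep")  # ~750 m
```

Lower dielectric constants, such as those of ice-rich material, mean faster wave speeds and therefore deeper inferred layers for the same delay.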

While Mars Express’s closest approach is in daylight, MARSIS is operating only at the higher frequencies within its capability, because the lower-frequency radio signals are disturbed by the dayside ionosphere. With these higher frequencies, MARSIS can study the ionosphere and the surface, while some shallow subsurface sounding can still be attempted.

During night-time observations, like those performed briefly last summer immediately after deployment, it is possible for MARSIS to use all frequencies for scientific measurements, including the lowest ones, suitable for penetrating under the soil of Mars.

Tuning to different frequencies for different targets in different conditions is not the only secret of MARSIS. Because the instrument responds to signals reflected from any direction, scientists must also do a huge amount of analysis work to remove interfering signals from the echoes.

A typical example of what they look for is “clutter backscattering” – reflections apparently coming from the subsurface, but actually produced by irregularities in the surface terrain that delay the return of the echo. For this “cleaning” work, the team also makes use of “surface echo simulator” computer programs.

In the first months of operations, MARSIS performed its first ionospheric soundings. The data are converted into typical plots, called “ionograms”, in which the altitude at which each echo was generated, deduced from the echo time delay, is given for each transmitted frequency. The intensity of the various echo signals detected is indicated in different colours.

In parallel to the analysis of surface and subsurface signals, the scientists are studying all ionograms to draw the first conclusions on the nature and behaviour of the ionosphere of Mars, and of its interaction with the planet and the surrounding environment.

Original Source: ESA Portal

M-Class Dwarfs Could Be Good For Life After All

The number of HabCat stars, as a function of distance. Image credit: Turnbull, Tarter.
Scientists have been searching actively for signs of intelligent extraterrestrial civilizations for nearly half a century. Their main approach has been to point radio telescopes toward target stars and to “listen” for electronic transmissions from other worlds. A radio telescope is like a satellite TV dish – only bigger. Just as you can tune your TV to different frequencies, or channels, researchers can use the electronics attached to a radio telescope to monitor different frequencies at which they suspect ET may be transmitting signals out into the galaxy.

So far, no broadcasts have been received. But then, no one knows how many other civilizations with radio transmitters are out there – or, if they exist, where they are likely to be found. It’s only recently that the existence of planets around other stars has been confirmed, and because current planet-finding techniques are limited to detecting relatively large planets, we have yet to find the first Earth-like planet orbiting another star. Most planet hunters believe it’s only a matter of time before we find other Earths, but no one can yet make even a well-founded guess about how many terrestrial planets are in our galactic neighborhood.

With so little information to go on, it has been difficult for scientists involved in SETI (the search for extra-terrestrial intelligence) to decide how to focus their search. So they’ve had to make some assumptions. One of those assumptions, which may seem a bit odd at first, is that humans are “normal.” That is to say, because we know for certain that intelligent life evolved on our planet, it stands to reason that other stars like ours may have planets like ours orbiting them, on which other intelligent civilizations have emerged. Based on this terrestrial bias, SETI searches thus far have focused on stars like our sun.

“The observational SETI programs have traditionally confined themselves to looking at stars that are very similar to our own star,” says Jill Tarter, director of the SETI Institute’s Center for SETI Research in Mountain View, California. “Because, after all, that’s the one place where we know that life evolved on a planetary surface and produced a technology that might be detectable across interstellar distances.”

Astronomers classify stars according to their surface temperature. The sun is a G-class star. SETI searches to date have focused on G stars and stars that are either somewhat hotter than the sun (F stars) or somewhat cooler than the sun (K stars). That has yielded a catalog of about a quarter of a million target stars. According to conventional astronomical wisdom, stars hotter than F-class would burn out too quickly for intelligent life to develop on planets that orbit them. Historically, M-dwarf stars, which are dimmer than K stars, also have been dismissed as potential SETI targets.

The two major arguments against M dwarfs have been:

They’re too dim. M dwarfs put out so little radiation that a habitable planet would have to orbit very close in. Farther-out planets would be frozen solid, too cold for life. A close-in planet would be tidally locked, though, always showing the same face to the star, as the moon does to Earth. The star-facing side would roast, while the opposite side would freeze. Not so good for having lots of liquid water around. And, says Tarter, “Liquid water is essential for life, at least for life as we know it.”

They’re too active. M dwarfs are known for their flare activity. Such stellar flares produce UV-B radiation, which can destroy DNA, and X-rays, which in large doses are lethal. Presumably such radiation would be as harmful to extraterrestrial life as it is to life on Earth.

These arguments seem reasonable. But there’s a catch. Most of the stars in the galaxy – more than two-thirds of them – are M dwarfs. If M dwarfs can host habitable planets, those planets might well be home to intelligent species. With radio transmitters. So, as scientists have begun to learn more about other solar systems, and as computer models of solar-system formation have gotten more sophisticated, some SETI researchers have begun to question the assumptions that led them to reject M dwarfs as potential SETI targets.

For example, atmospheric modeling has shown that if a planet orbiting an M dwarf close in had a reasonably thick atmosphere, circulation would transfer the sun’s heat around the planet and even out the temperature worldwide.

“If you put a little bit of greenhouse gas into an atmosphere, the circulations can keep that atmosphere at a reasonable temperature and you can dissipate the heat from the star-facing side and bring it around to the far side. And, perhaps, end up with a habitable world,” says Tarter.

Scientists have also learned that most of an M dwarf’s hyperactivity occurs early in its life cycle, during the first billion years or so. After that, the star tends to settle down and burn quietly for many billions of years more. Once the fireworks end, life might be able to take hold.

The question of M-dwarf habitability is a critical one for Tarter. The SETI Institute is in the process of building a new radio telescope, the Allen Telescope Array. Made up of 350 small antennas, the array will do double duty: it will be used by radio astronomers to survey the skies and it will search for radio transmissions from extraterrestrial civilizations.

“It’s an observatory that will simultaneously and continuously do traditional radio-astronomy observing and SETI observations,” says Tarter. “It’s the first telescope ever that’s being built to optimize both of those strategies.”

For the most part, traditional radio-astronomy research will determine where the telescope gets pointed; the SETI Institute will simply hitch a ride on the incoming signals. The array combines the signals from the many small antennas to make a large virtual antenna. By adjusting the electronics, researchers will be able to form as many as eight virtual antennas, each pointed at a different star.

That’s where the M-dwarf question comes into play. At the highest frequencies that the telescope can receive, the instrument can focus on only a tiny spot in the sky. For the SETI search to be as efficient as possible, wherever the telescope is pointed, the institute’s researchers want to have several target stars to set their sights on. If only F, G and K stars are considered, there aren’t enough stars to go around. But if M dwarfs are included as targets, the number of prospects could increase as much as ten-fold.

“To make the most progress and to do the fastest survey of the largest number of stars in the next decade or so,” Tarter says, “I want a huge catalog of target stars. I want millions of stars.”

There is no way to know for sure whether M dwarfs host habitable planets. But no one has yet found a habitable planet around any star other than the sun, and it’s unlikely that one will be discovered for many years to come. Technology capable of finding Earth-sized planets is still in the development stage.

To do their work, though, SETI researchers don’t need to know whether or not the stars they’re investigating actually have habitable planets. They simply need to know which stars have the potential to host habitable worlds. Any star with potential belongs on their list.

“It’s not the star that I’m interested in,” Tarter says. “It’s the techno-signature from the inhabitants on a planet around the star. I don’t ever have to see the star, as long as I know that it’s in that direction. I don’t ever have to see the planet. But if I can see their radio transmitter – bingo! – I’ve gotten there. I’ve found a habitable world.”

That’s why Tarter and her colleagues want to know whether or not to include M dwarfs on their target list. To help answer that question, Tarter convened a workshop in July of this year that brought together astronomers, planetary scientists, biologists, and even a few geologists, to explore whether it made sense to add M dwarfs to the catalog of SETI target stars. Although workshop participants did identify some areas that require further research, no insurmountable problems turned up. The group plans to publish its preliminary findings for scrutiny by the wider scientific community.

And that means that if we ever do receive a radio signal from an extraterrestrial civilization, the beings who sent it just might be residents of a solar system with a dim, red M dwarf at its center.

Original Source: NASA Astrobiology

Ariane 5 Lofts Record Payload into Orbit

The heavy-lift Ariane 5 ECA. Image credit: Arianespace.
During the night of Wednesday, November 16 to Thursday, November 17, Arianespace placed two satellites into geostationary transfer orbit: the SPACEWAY 2 high-definition direct broadcast satellite for the American operator DIRECTV, and the TELKOM 2 communications satellite for the Indonesian operator PT Telekomunikasi Indonesia Tbk.

20th successful Ariane 5 launch, 10th in a row, record payload.

Today’s mission sets a new record for commercial launches: with over 8,000 kg injected into orbit, the SPACEWAY 2 and TELKOM 2 satellites represent the heaviest dual payload ever launched.

Today, Ariane 5 is the only commercial launcher in service capable of simultaneously launching two payloads. Ariane 5 ECA offers a payload capacity of nearly 10,000 kg into geostationary transfer orbit, giving Arianespace’s customers enhanced performance, flexibility and competitiveness through the best launch service in the world.

Today’s mission was the 20th successful launch of an Ariane 5, and the 10th successful launch in a row. Coming one week after a Soyuz rocket successfully launched the Venus Express spacecraft from the Baikonur Cosmodrome, this confirms that Arianespace, with its complete family of launchers, offers the best launch solution for operators from around the world.

Original Source: Arianespace News Release

Gravity Probe B Will Tell Us If Einstein Was Right

An artist’s concept of twisted space-time around Earth. Image credit: NASA.
Is Earth in a vortex of space-time?

We’ll soon know the answer: A NASA/Stanford physics experiment called Gravity Probe B (GP-B) recently finished a year of gathering science data in Earth orbit. The results, which will take another year to analyze, should reveal the shape of space-time around Earth–and, possibly, the vortex.

Time and space, according to Einstein’s theories of relativity, are woven together, forming a four-dimensional fabric called “space-time.” The tremendous mass of Earth dimples this fabric, much like a heavy person sitting in the middle of a trampoline. Gravity, says Einstein, is simply the motion of objects following the curvaceous lines of the dimple.

If Earth were stationary, that would be the end of the story. But Earth is not stationary. Our planet spins, and the spin should twist the dimple, slightly, pulling it around into a 4-dimensional swirl. This is what GP-B went to space to check.

The idea behind the experiment is simple:

Put a spinning gyroscope into orbit around the Earth, with the spin axis pointed toward some distant star as a fixed reference point. Free from external forces, the gyroscope’s axis should continue pointing at the star–forever. But if space is twisted, the direction of the gyroscope’s axis should drift over time. By noting this change in direction relative to the star, the twists of space-time could be measured.

In practice, the experiment is tremendously difficult.

The four gyroscopes in GP-B are the most perfect spheres ever made by humans. These ping pong-sized balls of fused quartz and silicon are 1.5 inches across and never vary from a perfect sphere by more than 40 atomic layers. If the gyroscopes weren’t so spherical, their spin axes would wobble even without the effects of relativity.

According to calculations, the twisted space-time around Earth should cause the axes of the gyros to drift merely 0.041 arcseconds over a year. An arcsecond is 1/3600th of a degree. To measure this angle reasonably well, GP-B needed a fantastic precision of 0.0005 arcseconds. It’s like measuring the thickness of a sheet of paper held edge-on 100 miles away.
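That comparison is easy to sanity-check with the small-angle approximation. A minimal sketch, assuming a typical sheet of paper about 0.1 mm thick:

```python
import math

paper_m = 1e-4               # assumed paper thickness, m (~0.1 mm)
distance_m = 100 * 1609.34   # 100 miles in metres

angle_rad = paper_m / distance_m            # small-angle approximation
angle_arcsec = math.degrees(angle_rad) * 3600
print(f"{angle_arcsec:.1e} arcseconds")     # ~1.3e-4
```

The result lands around 10⁻⁴ arcseconds – the same order of magnitude as GP-B’s 0.0005-arcsecond goal, which is the point of the analogy.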

GP-B researchers invented whole new technologies to make this possible. They developed a “drag free” satellite that could brush against the outer layers of Earth’s atmosphere without disturbing the gyros. They figured out how to keep Earth’s penetrating magnetic field out of the spacecraft. And they concocted a device to measure the spin of a gyro–without touching the gyro.

Pulling off the experiment was an exceptional challenge. A lot of time and money was on the line, but the GP-B scientists appear to have done it.

“There were not any major surprises” in the experiment’s performance, says physics professor Francis Everitt, the Principal Investigator for GP-B at Stanford University. Now that data-taking is complete, he says the mood among the GP-B scientists is “a lot of enthusiasm, and a realization also that a lot of grinding hard work is ahead of us.”

A careful, thorough analysis of the data is underway. The scientists will do it in three stages, Everitt explains. First, they will look at the data from each day of the year-long experiment, checking for irregularities. Next they’ll break the data into roughly month-long chunks, and finally they’ll look at the whole year. By doing it this way, the scientists should be able to find any problems that a more simple analysis might miss.

Eventually scientists around the world will scrutinize the data. Says Everitt, “we want our sternest critics to be us.”

The stakes are high. If they detect the vortex, precisely as expected, it simply means that Einstein was right, again. But what if they don’t? There might be a flaw in Einstein’s theory, a tiny discrepancy that heralds a revolution in physics.

First, though, there are a lot of data to analyze. Stay tuned.

Original Source: NASA News Release

Simulation Casts Doubts on One Theory of Star Formation

A slice through a 3-D simulation of a turbulent clump of molecular hydrogen. Image credit: Mark Krumholz.
Astrophysicists at the University of California, Berkeley, and Lawrence Livermore National Laboratory (LLNL) have exploded one of two competing theories about how stars form inside immense clouds of interstellar gas.

That model, which is less than 10 years old and is championed by some British astronomers, predicts that interstellar hydrogen clouds develop clumps in which several small cores – the seeds of future stars – form. These cores, less than a light year across, collapse under their own gravity and compete for gas in the surrounding clump, often gaining 10 to 100 times their original mass from the clump.

The alternative model, often termed the “gravitational collapse and fragmentation” theory, also presumes that clouds develop clumps in which proto-stellar cores form. But in this theory, the cores are large and, though they may fragment into smaller pieces to form binary or multiple star systems, contain nearly all the mass they ever will.

“In competitive accretion, the cores are seeds that grow to become stars; in our picture, the cores turn into the stars,” explained Chris McKee, professor of physics and of astronomy at UC Berkeley. “The observations to date, which focus primarily on regions of low-mass star formation, like the sun, are consistent with our model and inconsistent with theirs.”

“Competitive accretion is the big theory of star formation in Europe, and we now think it’s a dead theory,” added Richard Klein, an adjunct professor of astronomy at UC Berkeley and a researcher at LLNL.

Mark R. Krumholz, now a post-doctoral fellow at Princeton University, reports the findings together with McKee and Klein in the Nov. 17 issue of Nature.

Both theories try to explain how stars form in cold clouds of molecular hydrogen, perhaps 100 light years across and containing 100,000 times the mass of our sun. Such clouds have been photographed in brilliant color by the Hubble and Spitzer space telescopes, yet the dynamics of a cloud’s collapse into one or many stars is far from clear. A theory of star formation is critical to understanding how galaxies and clusters of galaxies form, McKee said.

“Star formation is a very rich problem, involving questions such as how stars like the sun formed, why a very large number of stars are in binary star systems, and how stars ten to a hundred times the mass of the sun form,” he said. “The more massive stars are important because, when they explode in a supernova, they produce most of the heavy elements we see in the material around us.”

The competitive accretion model was hatched in the late 1990s in response to problems with the gravitational collapse model, which seemed to have trouble explaining how large stars form. In particular, the theory couldn’t explain why the intense radiation from a large protostar doesn’t just blow off the star’s outer layers and prevent it from growing larger, even though astronomers have discovered stars that are 100 times the mass of the sun.

While theorists, among them McKee, Klein and Krumholz, have advanced the gravitational collapse theory further toward explaining this problem, the competitive accretion theory has come increasingly into conflict with observations. For example, the accretion theory predicts that brown dwarfs, which are failed stars, are thrown out of clumps and lose their encircling disks of gas and dust. In the past year, however, numerous brown dwarfs have been found with planetary disks.

“Competitive accretion theorists have ignored these observations,” Klein said. “The ultimate test of any theory is how well it agrees with observation, and here the gravitational collapse theory appears to be the clear winner.”

The model used by Krumholz, McKee and Klein is a supercomputer simulation of the complicated dynamics of gas inside a swirling, turbulent cloud of molecular hydrogen as it accretes onto a star. Theirs is the first study of the effects of turbulence on the rate at which a star accretes matter as it moves through a gas cloud, and it demolishes the “competitive accretion” theory.
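For context, the benchmark such calculations start from is the classical Bondi-Hoyle rate for a star moving through uniform gas, Ṁ ≈ 4πρ(GM)² / (v² + c_s²)^(3/2); turbulence changes the effective velocities in this formula. A minimal sketch with assumed, illustrative cloud values (not the paper’s inputs):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
YEAR = 3.156e7     # seconds per year

M = 0.5 * M_SUN    # assumed protostellar mass
rho = 1e-16        # assumed clump gas density, kg/m^3 (~3e4 molecules/cm^3)
v = 2000.0         # assumed star-gas relative speed, m/s
cs = 200.0         # assumed gas sound speed, m/s (cold molecular gas)

# Classical Bondi-Hoyle accretion rate for a star moving through gas
mdot = 4 * math.pi * rho * (G * M) ** 2 / (v**2 + cs**2) ** 1.5
print(f"{mdot * YEAR / M_SUN:.1e} solar masses per year")  # ~1e-8
```

Because the rate falls off steeply as the relative speed grows, realistic turbulent motions choke off this channel – consistent with the team’s conclusion that a seed core cannot gather 10 to 100 times its mass this way.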

Employing 256 parallel processors at the San Diego Supercomputer Center at UC San Diego, they ran their model for nearly two weeks to show that it accurately represented star formation dynamics.

“For six months, we worked on very, very detailed, high-resolution simulations to develop that theory,” Klein said. “Then, having that theory in hand, we applied it to star forming regions with the properties that one could glean from a star forming region.”

The models, which also were run on supercomputers at Lawrence Berkeley National Laboratory and LLNL, showed that turbulence in the core and surrounding clump would prevent accretion from adding much mass to a protostar.

“We have shown that, because of turbulence, a star cannot efficiently accrete much more mass from the surrounding clump,” Klein said. “In our theory, once a core collapses and fragments, that star basically has all the mass it is ever going to have. If it was born in a low-mass core, it will end up being a low-mass star. If it’s born in a high mass core, it may become a high-mass star.”

McKee noted that the researchers’ supercomputer simulation indicates competitive accretion may work well for small clouds with very little turbulence, but these rarely, if ever, occur and have not been observed to date. Real star formation regions have much more turbulence than assumed in the accretion model, and the turbulence does not quickly decay, as that model presumes. Some unknown processes, perhaps matter flowing out of protostars, keep the gases roiled up so that the core does not collapse quickly.

“Turbulence opposes gravity; without it, a molecular cloud would collapse far more rapidly than observed,” Klein said. “Both theories assume turbulence is there. The key is (that) there are processes going on as stars begin to form that keep turbulence alive and prevent it from decaying. The competitive accretion model doesn’t have any way to put this into the calculations, which means they’re not modeling real star forming regions.”

Klein, McKee and Krumholz continue to refine their model to explain how radiation from large protostars escapes without blowing away all the infalling gas. For example, they have shown that some of the radiation can escape through cavities created by the jets observed to come out the poles of many stars in formation. Many predictions of the theory may be answered by new and larger telescopes now under construction, in particular the sensitive, high-resolution ALMA telescope being constructed in Chile by a consortium of United States, European and Japanese astronomers, McKee said.

The work was supported by the National Aeronautics and Space Administration, the National Science Foundation and the Department of Energy.

Original Source: UC Berkeley News Release