Hubble Helps Measure the Pace of Dark Energy

Image credit: Hubble
The good news from NASA’s Hubble Space Telescope is that Einstein was right... maybe.

A strange form of energy called “dark energy” is looking a little more like the repulsive force that Einstein theorized in an attempt to balance the universe against its own gravity. Even if Einstein turns out to be wrong, the universe’s dark energy probably won’t destroy the universe any sooner than about 30 billion years from now, say Hubble researchers.

“Right now we’re about twice as confident as before that Einstein’s cosmological constant is real, or at least that dark energy does not appear to be changing fast enough (if at all) to cause an end to the universe anytime soon,” says Adam Riess of the Space Telescope Science Institute in Baltimore.

Riess used Hubble to find nature’s own “weapons of mass destruction”: very distant supernovae that exploded when the universe was less than half its current age. The apparent brightness of a certain type of supernova gives cosmologists a way to measure the expansion rate of the universe at different times in the past.
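
For readers who want the arithmetic, the “standard candle” logic can be sketched in a few lines of Python. This is a minimal illustration, not the team’s analysis: it assumes a Type Ia peak absolute magnitude of about -19.3 (a commonly quoted value) and a made-up apparent magnitude.

```python
# Minimal "standard candle" sketch, assuming a Type Ia peak absolute
# magnitude of about -19.3 (a commonly quoted value; the team's actual
# calibration is more involved).
M_PEAK = -19.3

def luminosity_distance_mpc(apparent_mag):
    """Invert the distance modulus m - M = 5*log10(d_pc) - 5."""
    d_parsecs = 10 ** ((apparent_mag - M_PEAK + 5) / 5)
    return d_parsecs / 1e6  # parsecs to megaparsecs

# A hypothetical supernova that peaks at apparent magnitude 24:
print(f"~{luminosity_distance_mpc(24.0):,.0f} Mpc")
```

Comparing such distances with the supernovae’s redshifts is what reveals how the expansion rate has changed over cosmic time.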

Riess and his team joined efforts with the Great Observatories Origins Deep Survey (GOODS) program, the largest deep galaxy survey attempted by Hubble to date, to turn the Space Telescope into a supernova search engine on an unprecedented scale. In the process, they discovered 42 new supernovae in the GOODS area, including 6 of the 7 most distant known.

Cosmologists understand almost nothing about dark energy even though it appears to comprise about 70 percent of the universe. They are desperately seeking to uncover its two most fundamental properties: its strength and its permanence.

In a paper to be published in the Astrophysical Journal, Riess and his collaborators have made the first meaningful measurement of the second property, its permanence.

Currently, there are two leading interpretations for the dark energy as well as many more exotic possibilities. It could be an energy percolating from empty space as Einstein’s theorized “cosmological constant,” an interpretation which predicts that dark energy is unchanging and of a prescribed strength.

An alternative possibility is that dark energy is associated with a changing energy field dubbed “quintessence.” This field would be causing the current acceleration, a milder version of the inflationary episode from which the early universe emerged.

When astronomers first realized the universe was accelerating, the conventional wisdom was that it would expand forever. However, until we better understand the nature of dark energy and its properties, other scenarios for the fate of the universe are possible.

If the repulsion from dark energy is or becomes stronger than Einstein’s prediction, the universe may be torn apart by a future “Big Rip,” during which the universe expands so violently that first the galaxies, then the stars, then planets, and finally atoms come unglued in a catastrophic end of time. This idea is currently very speculative, but it is being pursued by theorists.

At the other extreme, a variable dark energy might fade away and then flip in force, pulling the universe together rather than pushing it apart. This would lead to a “Big Crunch” in which the universe ultimately implodes. “This looks like the least likely scenario at present,” says Riess.

Understanding dark energy and determining the universe’s ultimate fate will require further observations. Hubble and future space telescopes capable of looking more than halfway across the universe will be needed to achieve the necessary precision. The determination of the properties of dark energy has become the key goal of astronomy and physics today.

Original Source: Hubble News Release

Ulysses Finds Streams of Dust Coming from Io

Image credit: ESA
In a repeat performance of its groundbreaking discovery in 1992, the DUST instrument on board Ulysses has detected streams of dust particles flowing from Jupiter during the recent second encounter with the giant planet.

The dust streams, comprising grains no larger than smoke particles, originate in the fiery volcanoes of Jupiter’s moon Io. The dust stream particles, which carry an electric charge, are strongly influenced by Jupiter’s magnetic field. Electromagnetic forces propel the dust out of the Jovian system, into interplanetary space.

“The recent observations include the most distant dust stream ever recorded – 3.3 AU (nearly 500 million km) from Jupiter!” said Dr. Harald Krüger, from the Max-Planck-Institut für Kernphysik in Heidelberg. Another unusual feature is that the streams occur with a period of about 28 days. This suggests that they are influenced by solar wind streams that rotate with the Sun. “Interestingly, the most intense peaks show some fine structure which was not the case in 1992,” said Krüger, Principal Investigator for the DUST instrument.

Early on in the history of the solar system, as the planets were being formed, small dust particles were much more abundant. These charged grains were influenced by magnetic fields from the early Sun, in much the same way as the dust from Io is affected by Jupiter’s magnetic field today. “By studying the behaviour of these dust stream particles, we hope to gain an insight into processes that led to the formation of the moons and planets in our solar system,” said Richard Marsden, ESA’s Mission Manager for Ulysses. Dust particles carry information about charging processes in regions of Jupiter’s magnetosphere that are difficult to access by other means.

Original Source: ESA News Release

Look for Dust to Find New Earths

Image credit: NASA
If alien astronomers around a distant star had studied the young Sun four-and-a-half billion years ago, could they have seen signs of a newly-formed Earth orbiting this innocuous yellow star? The answer is yes, according to Scott Kenyon (Smithsonian Astrophysical Observatory) and Benjamin Bromley (University of Utah). Moreover, their computer model says that we can use the same signs to locate places where Earth-size planets currently are forming: young worlds that, one day, may host life of their own.

The key to locating newborn Earths, say Kenyon and Bromley, is to look not for the planet itself, but for a ring of dust orbiting the star that is a fingerprint of terrestrial (rocky) planet formation.

“Chances are, if there’s a ring of dust, there’s a planet,” says Kenyon.

Good Planets Are Hard To Find

Our solar system formed from a swirling disk of gas and dust, called a protoplanetary disk, orbiting the young Sun. The same materials are found throughout our galaxy, so the laws of physics predict that other star systems will form planets in a similar manner.

Although planets may be common, they are difficult to detect because they are too faint and located too close to a much brighter star. Therefore, astronomers seek planets by looking for indirect evidence of their existence. In young planetary systems, that evidence may be present in the disk itself, and in how the planet affects the dusty disk from which it forms.

Large, Jupiter-sized planets possess strong gravity. That gravity strongly affects the dusty disk. A single Jupiter can clear a ring-shaped gap in the disk, warp the disk, or create concentrated swaths of dust that leave a pattern in the disk like a wake from a boat. The presence of a giant planet may explain the wake-like pattern seen in the disk around the 350 million-year-old star Vega.

Small, Earth-sized worlds, on the other hand, possess weaker gravity. They affect the disk more weakly, leaving more subtle signs of their presence. Rather than looking for warps or wakes, Kenyon and Bromley recommend looking to see how bright the star system is at infrared (IR) wavelengths of light. (Infrared light, which we perceive as heat, is light with longer wavelengths and less energy than visible light.)

Stars with dusty disks are brighter in the IR than stars without disks. The more dust a star system holds, the brighter it is in the IR. Kenyon and Bromley have shown that astronomers can use IR brightnesses not only to detect a disk, but also to tell when an Earth-sized planet is forming within that disk.
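
As a rough illustration of why dust shows up in the IR, the sketch below treats the star and its dust as blackbodies. The fractional dust luminosity and the temperatures are assumed round numbers, not values from Kenyon and Bromley’s models.

```python
import numpy as np

# Illustrative sketch of an infrared excess, treating star and dust as
# blackbodies. The fractional dust luminosity (1e-4) and the temperatures
# are assumed round numbers, not values from the published models.
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance B_lambda."""
    return (2 * H * C**2 / wavelength_m**5 /
            np.expm1(H * C / (wavelength_m * KB * temp_k)))

t_star, t_dust = 5800.0, 280.0   # Sun-like star; dust near 1 AU
f_dust = 1e-4                    # assumed L_dust / L_star

# Since L = (area) * sigma * T^4, the dust's emitting area follows from
# its fractional luminosity:
area_ratio = f_dust * (t_star / t_dust) ** 4

for wl_um in (2, 10, 24):
    wl = wl_um * 1e-6
    excess = area_ratio * planck(wl, t_dust) / planck(wl, t_star)
    print(f"{wl_um:2d} um: dust adds {100 * excess:.1f}% to the star's flux")
```

Even though the dust contributes only 0.01% of the total luminosity in this toy setup, it adds a noticeable excess in the mid-infrared, where the cool grains far outshine their share of the stellar photosphere.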

“We were the first to calculate the expected levels of dust production and associated infrared excesses, and the first to demonstrate that terrestrial planet formation produces observable amounts of dust,” says Bromley.

Building Planets From The Ground Up
The most prevalent theory of planet formation calls for building planets “from the ground up.” According to the coagulation theory, small bits of rocky material in a protoplanetary disk collide and stick together. Over thousands of years, small clumps grow into larger and larger clumps, like building a snowman one handful of snow at a time. Eventually, the rocky clumps grow so large that they become full-fledged planets.

Kenyon and Bromley model the planet formation process using a complex computer program. They “seed” a protoplanetary disk with a billion planetesimals 0.6 miles (1 kilometer) in size, all orbiting a central star, and step the system forward in time to see how planets evolve from those basic ingredients.
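
The flavor of such a calculation can be conveyed with a toy, single-embryo version of the process. The authors’ statistical code evolving a billion bodies is far more sophisticated, and the disk parameters below are assumed round numbers, so the timescales will not match their published results; the point is only the structure of the sweep-up calculation.

```python
import math

# Toy "ground up" growth: one embryo at 1 AU sweeps up planetesimals,
# with gravitational focusing boosting its cross section as it grows.
# All disk parameters are assumed round numbers (illustrative only).
G = 6.674e-11
RHO = 3000.0      # bulk density of rock, kg/m^3
SIGMA = 100.0     # surface density of solids, kg/m^2 (assumed)
OMEGA = 2.0e-7    # orbital frequency at 1 AU, rad/s
V_REL = 10.0      # planetesimal random velocity, m/s (assumed)
YEAR = 3.156e7    # seconds per year

def grow(radius_m, max_years, dt_years=50.0):
    """Integrate dM/dt = pi R^2 * rho_disk * v_rel * focusing."""
    rho_disk = SIGMA * OMEGA / (2 * V_REL)   # midplane solid density
    t = 0.0
    while t < max_years and radius_m < 2e6:  # stop at 2000 km
        mass = (4 / 3) * math.pi * RHO * radius_m ** 3
        focusing = 1 + 2 * G * mass / (radius_m * V_REL ** 2)
        dmdt = math.pi * radius_m ** 2 * rho_disk * V_REL * focusing
        mass += dmdt * dt_years * YEAR
        radius_m = (3 * mass / (4 * math.pi * RHO)) ** (1 / 3)
        t += dt_years
    return radius_m / 1e3, t

radius_km, t = grow(1e3, 1e6)   # start from a 1 km planetesimal
print(f"~{radius_km:.0f} km after {t:,.0f} years")
```

Once gravitational focusing dominates, growth runs away: the bigger the embryo gets, the faster it sweeps up its neighbors.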

“We made the simulation as realistic as we could and still complete the calculations in a reasonable amount of time,” says Bromley.

They found the planet formation process to be remarkably efficient. Initially, collisions between planetesimals occur at low velocities, so colliding objects tend to merge and grow. At a typical Earth-Sun distance, it takes only about 1000 years for 1-kilometer (0.6-mile) objects to grow into 100-kilometer (60-mile) objects. Another 10,000 years produces 600-mile (roughly 1,000-kilometer) protoplanets, which grow over an additional 10,000 years to become 1200-mile (roughly 2,000-kilometer) protoplanets. Hence, Moon-sized objects can form in as little as 20,000 years.

As planetesimals within the disk grow larger and more massive, their gravity grows stronger. Once a few of the objects reach a size of 600 miles, they begin “stirring up” the remaining smaller objects. Gravity slingshots the smaller, asteroid-sized chunks of rock to higher and higher speeds. They travel so fast that when they collide, they don’t merge; they pulverize, smashing each other apart violently. While the largest protoplanets continue to grow, the rest of the rocky planetesimals grind each other into dust.

“The dust forms right where the planet is forming, at the same distance from its star,” says Kenyon. As a result, the temperature of the dust indicates where the planet is forming. Dust in a Venus-like orbit will be hotter than dust in an Earth-like orbit, giving a clue to the infant planet’s distance from its star.
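
The temperature-to-distance link is simple blackbody physics. Here is a minimal sketch, assuming a Sun-like star and dust that absorbs and re-radiates as a blackbody:

```python
import math

# Minimal sketch, assuming blackbody dust around a Sun-like star: the
# equilibrium temperature falls off as one over the square root of the
# orbital distance, with T ~ 278 K at 1 AU for solar luminosity.
def dust_temp_k(a_au, l_star_lsun=1.0):
    return 278.0 * l_star_lsun ** 0.25 / math.sqrt(a_au)

for name, a_au in [("Venus-like", 0.72), ("Earth-like", 1.00),
                   ("Mars-like", 1.52)]:
    print(f"{name} orbit ({a_au} AU): ~{dust_temp_k(a_au):.0f} K")
```

Hotter dust, tighter orbit: measuring the dust’s temperature amounts to measuring where in the disk the grinding is happening.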

The size of the largest objects in the disk determines the dust production rate. The amount of dust peaks when 600-mile protoplanets have formed.

“The Spitzer Space Telescope should be able to detect such dust peaks,” says Bromley.

Currently, Kenyon and Bromley’s terrestrial planet formation model covers only a fraction of the solar system, from the orbit of Venus to a distance about halfway between Earth and Mars. In the future, they plan to extend the model to encompass orbits as close to the Sun as Mercury and as distant as Mars.

They also have modeled the formation of the Kuiper Belt, a region of small, icy and rocky objects beyond the orbit of Neptune. The next logical step is to model the formation of gas giants like Jupiter and Saturn.

“We’re starting at the edges of the solar system and working inward,” Kenyon says with a grin. “We’re also working our way up in mass. The Earth is 1000 times more massive than a Kuiper Belt object, and Jupiter is 1000 times more massive than the Earth.”

“Our ultimate goal is to model and understand the formation of our entire solar system.” Kenyon estimates that their goal is attainable within a decade, as computer speed continues to increase, enabling the simulation of an entire solar system.

This research was published in the February 20, 2004, issue of The Astrophysical Journal Letters. Additional information and animations are available online at http://cfa-www.harvard.edu/~kenyon/.

Headquartered in Cambridge, Mass., the Harvard-Smithsonian Center for Astrophysics is a joint collaboration between the Smithsonian Astrophysical Observatory and the Harvard College Observatory. CfA scientists, organized into six research divisions, study the origin, evolution and ultimate fate of the universe.

Original Source: CfA News Release

New Kuiper Object Rivals Pluto

Image credit: Caltech
Planetary scientists at the California Institute of Technology and Yale University on Tuesday night discovered a new planetoid in the outer fringes of the solar system.

The planetoid, currently known only as 2004 DW, could be even larger than Quaoar, the current record holder in the area known as the Kuiper Belt, and is some 4.4 billion miles from Earth.

According to the discoverers, Caltech associate professor of planetary astronomy Mike Brown and his colleagues Chad Trujillo (now at the Gemini North observatory in Hawaii) and David Rabinowitz of Yale University, the planetoid was found as part of the same search program that discovered Quaoar in late 2002. The astronomers use the 48-inch Samuel Oschin Telescope at Palomar Observatory and the recently installed QUEST CCD camera, built by a consortium including Yale and Indiana University, to systematically study different regions of the sky each night.

Unlike Quaoar, the new planetoid hasn’t yet been pinpointed on old photographic plates or other images. Because its orbit is therefore not well understood yet, it cannot be given an official name.

“So far we only have a one-day orbit,” said Brown, explaining that the data covers only a tiny fraction of the orbit the object follows in its more than 300-year trip around the sun. “From that we know only how far away it is and how its orbit is tilted relative to the planets.”

The tilt that Brown has measured is an astonishingly large 20 degrees, larger even than that of Pluto, which has an orbital inclination of 17 degrees and is an anomaly among the otherwise planar planets.

The size of 2004 DW is not yet certain; Brown estimates a size of about 1,400 kilometers, based on a comparison of the planetoid’s luminosity with that of Quaoar. Because the distance of the object can already be calculated, its luminosity should be a good indicator of its size relative to Quaoar, provided the two objects have the same albedo, or reflectivity.

Quaoar is known to have an albedo of about 10 percent, which is slightly higher than the reflectivity of our own moon. Thus, if the new object is similar, the 1,400-kilometer estimate should hold. If its albedo is lower, then it could actually be somewhat larger; or if higher, smaller.
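
The relative-size logic can be sketched as follows. The flux ratio below is a made-up illustrative number, not the team’s photometry, and phase effects are ignored.

```python
import math

# Sketch of the relative-size estimate, ignoring phase effects. The
# reflected flux of a distant body scales as albedo * D^2 / r^4 when its
# solar and geocentric distances are nearly equal.
QUAOAR_D_KM = 1250.0    # Quaoar's measured diameter
QUAOAR_R_AU = 43.0      # Quaoar's approximate distance

def diameter_km(flux_vs_quaoar, r_au, albedo=0.10, quaoar_albedo=0.10):
    scale = flux_vs_quaoar * (r_au / QUAOAR_R_AU) ** 4 * quaoar_albedo / albedo
    return QUAOAR_D_KM * math.sqrt(scale)

# Suppose 2004 DW, at ~47 AU (4.4 billion miles), appears ~90% as bright
# as Quaoar and shares its 10% albedo (made-up inputs):
print(f"~{diameter_km(0.9, 47.0):.0f} km")
```

With those assumed inputs the answer lands near the quoted 1,400 kilometers; as the article notes, a lower albedo would push the true size up and a higher albedo would pull it down.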

According to Brown, scientists know little about the albedos of objects this large this far away, so the true size is quite uncertain. Researchers could best make size measurements with the Hubble Space Telescope or the newer Spitzer Space Telescope. The continued discovery of massive planetoids on the outer fringe of the solar system is further evidence that objects even farther and even larger are lurking out there. “It’s now only a matter of time before something is going to be discovered out there that will change our entire view of the outer solar system,” Brown says.

The team is working hard to uncover new information about the planetoid, which they will release as it becomes available, Brown adds. Other telescopes will also be used to better characterize the planetoid’s features.

Original Source: Caltech News Release

Nasca Lines Imaged from Orbit

Image credit: ESA
Visible from ESA’s Proba spacecraft, 600 kilometres away in space, are the largest of the many Nasca Lines: ancient desert markings now at risk from human encroachment as well as from flood events feared to be increasing in frequency.

Designated a World Heritage Site in 1994, the Lines are a mixture of animal figures and long straight lines etched across an area of about 70 km by 30 km on the Nasca plain, between the Andes and Pacific Coast at the southern end of Peru. The oldest lines date from around 400 BC and went on being created for perhaps a thousand years.

They were made simply enough, by moving dark surface stones to expose pale sand beneath. However, their intended purpose remains a mystery. It has variously been proposed that they were created as pathways for religious processions and ceremonies, as an astronomical observatory, or as a guide to underground water resources.

The Nasca Lines have been preserved down the centuries by extreme local dryness and a lack of erosion mechanisms, but they are now coming increasingly under threat: it is estimated that the last 30 years saw greater erosion and degradation of the site than the thousand years before them.

In this image, acquired by the Compact High Resolution Imaging Spectrometer (CHRIS) instrument aboard Proba on 26 September 2003, the 18.6-metre resolution is too low to make out the animal figures, although the straight Nasca Lines can be seen faintly. Clearest of the straight markings is actually the Pan-American Highway, built right through the region, seen as a dark marking starting at the irrigated fields beside the Ingenio River and running from near the image top to the bottom right-hand corner. Associated dirt track roads are also visible amidst the Nasca Lines.

Clearly shown in the Proba image is another cause of damage to the Lines: deposits left by mudslides after heavy rains in the Andean Mountains. These events are believed to be connected to the El Niño phenomenon in the Pacific Ocean (first named by Peruvian fishermen hundreds of years ago), and one concern is that they are becoming more frequent due to climate change.

A team from Edinburgh University and remote sensing company Vexcel UK has been using data from another ESA spacecraft to measure damage to the Nasca Lines, with their results due to be published in the May issue of the International Journal of Remote Sensing.

Their work involves combining radar images from the Synthetic Aperture Radar (SAR) instrument aboard ERS-2. Instead of measuring reflected light, SAR makes images from backscattered radar signals that chart surface roughness.

Nicholas Walker of Vexcel UK explained: “Although the instrument lacks sufficient resolution to unambiguously distinguish individual lines and shapes, by combining two satellite images using a technique known as SAR interferometric coherence it is possible to detect erosion and changes to the surface at the scale of centimetres”.

The image shown combines two scenes acquired by ERS-2 in 1997 and 1999. The bright areas show where there has been very little terrain change in the interval, while darker regions show where de-correlation has occurred, highlighting possible sites where erosion may be taking place.
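
For the technically curious, interferometric coherence has a compact definition that can be sketched on synthetic data. This is a minimal illustration only: real ERS-2 processing also requires precise co-registration and phase corrections, none of which is attempted here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Minimal sketch of SAR interferometric coherence on synthetic complex
# scenes (real processing needs co-registration and phase-flattening).
rng = np.random.default_rng(0)
shape = (512, 512)
s1 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
noise = rng.normal(size=shape) + 1j * rng.normal(size=shape)
s2 = 0.9 * s1 + 0.45 * noise     # a mostly unchanged surface

def coherence(a, b, win=5):
    """gamma = |<a conj(b)>| / sqrt(<|a|^2> <|b|^2>) over win x win boxes."""
    cross = a * np.conj(b)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(a) ** 2, win) *
                  uniform_filter(np.abs(b) ** 2, win))
    return np.abs(num) / den

gamma = coherence(s1, s2)
print(f"mean coherence ~{gamma.mean():.2f}")  # near 1: stable terrain;
                                              # near 0: change or erosion
```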

“Some de-correlation comes simply from the geometry of the area as seen by the instrument in space, with low coherence around areas overshadowed by Andean foothills to the east of the Nasca plain,” said Iain Woodhouse of Edinburgh University. “The second major loss is seen in the river valleys, due primarily to agricultural activity taking place during the two-year period.

“The third is changes in the surface of the plain due to run-off and human activity. The dark lines crossing the plain are roads and tracks serving local communities and the power line, as well as the Pan American Highway, the only surfaced road in this region of Peru.”

The de-correlation observed is most likely caused by vehicles displacing stones along these tracks and the sides of the Pan-American Highway. The de-correlation from the run-off is distinct from this as it follows the characteristic drainage patterns down from the foothills.

“Interferometric coherence seems to provide an effective means for monitoring these two major sources of risk to the integrity of the markings,” Woodhouse concluded. “We are developing the technique to include more sensors and data of higher spatial resolution, so as to encourage the establishment of a long term and frequent monitoring programme supporting conservation efforts in the area.”

Original Source: ESA News Release

Hubble Sees a Ring of Pearls Around 1987 Supernova

Image credit: Hubble
Seventeen years ago, astronomers spotted the brightest stellar explosion seen since the one observed by Johannes Kepler 400 years ago. Called SN 1987A, the titanic supernova explosion blazed with the power of 100,000,000 suns for several months following its discovery on Feb. 23, 1987. Although the supernova itself is now a million times fainter than it was 17 years ago, a new light show in the space surrounding it is just beginning.

This image, taken Nov. 28, 2003 by the Advanced Camera for Surveys aboard NASA’s Hubble Space Telescope, shows many bright spots along a ring of gas, like pearls on a necklace. These cosmic “pearls” are being produced as a supersonic shock wave unleashed during the explosion slams into the ring at more than a million miles per hour. The collision is heating the gas ring, causing its innermost regions to glow.

Astronomers detected the first “hot spot” in 1996, but now they see dozens of them all around the ring. The temperature of the flares surges from a few thousand degrees to a million degrees Fahrenheit. Individual hot spots cannot be seen from ground-based telescopes. Only Hubble can resolve them.

And, more hot spots are coming. In the next few years, the entire ring will be ablaze as it absorbs the full force of the crash. The glowing ring is expected to become bright enough to illuminate the star’s surroundings, thus providing astronomers with new information on how the star ejected material before the explosion.

The elongated and expanding object in the middle of the ring is debris from the supernova blast. The glowing debris is being heated by radioactive elements, principally titanium 44, that were created in the supernova explosion. The debris will continue to glow for many decades.

The ring, about a light-year across, already existed when the star exploded. Astronomers believe the star shed the ring about 20,000 years before the supernova blast.

The violent death of a star 20 times more massive than the Sun, an event called a supernova, created this stellar drama. The star actually exploded about 160,000 years ago, but it has taken that long for its light to reach Earth. The supernova resides in the Large Magellanic Cloud, a nearby small galaxy that is a satellite of our Milky Way galaxy.

Since its launch in 1990, the Hubble telescope has watched the supernova drama unfold, taking periodic snapshots of the gradually fading ring. Now, the orbiting observatory will continue to monitor the ring as it brightens from this collision.

Original Source: Hubble Space Telescope

Shuttle Launch Pushed Back to 2005

Space shuttle director Michael Kostelnik announced this week that work to return the shuttle fleet to flight was going slower than planned, probably pushing the next launch to March or April 2005. The delays have largely been caused by the difficulty engineers are having ensuring that chunks of foam can’t fall off the shuttle tank – like the one that led to the destruction of Columbia last year. NASA Administrator Sean O’Keefe had already commented to a Congressional science committee that a launch this year was unlikely.

Secret Russian Satellite Launched

Image credit: Starsem
The 1686th flight of a Soyuz family launch vehicle (Molniya) was performed Wednesday, February 18, 2004 from the Plesetsk Cosmodrome in Russia at 10:05 a.m. Moscow time (8:05 a.m. in Paris).

Starsem and its Russian partners report that the governmental spacecraft was accurately placed on the target orbit.

The launch was performed in the presence of Vladimir Putin, the President of the Russian Federation.

This was the second Soyuz family mission in 2004. Last year, Soyuz was launched 10 times with 100% success and performed its first GTO mission with the Israeli Amos 2 satellite. Ten Soyuz flights are planned for 2004.

Soyuz’s sustained launch rate confirms its position as one of the world’s primary launch vehicles. This rate also demonstrates the Samara Space Center’s continuous production capacity, as well as the operational capability of launch teams at Baikonur under the authority of the Russian Aviation and Space Agency.

Starsem is the Soyuz Company, bringing together all key players involved in the production, operation and international commercial marketing of the world’s most versatile launch vehicle. Shareholders in Starsem are Arianespace, EADS, the Russian Aviation and Space Agency and the Samara Space Center.

The Starsem manifest for Soyuz missions currently includes contracted launches for the European Space Agency and Eumetsat.

Original Source: Starsem News Release

What are the Risks of Radiation for Humans in Space?

Image credit: NASA
NASA has a mystery to solve: Can people go to Mars, or not?

“It’s a question of radiation,” says Frank Cucinotta of NASA’s Space Radiation Health Project at the Johnson Space Center. “We know how much radiation is out there, waiting for us between Earth and Mars, but we’re not sure how the human body is going to react to it.”

NASA astronauts have been in space, off and on, for 45 years. Except for a few quick trips to the moon, though, they’ve never spent much time far from Earth. Deep space is filled with protons from solar flares, gamma rays from newborn black holes, and cosmic rays from exploding stars. A long voyage to Mars, with no big planet nearby to block or deflect that radiation, is going to be a new adventure.

NASA weighs radiation danger in units of cancer risk. A healthy 40-year-old non-smoking American male stands a (whopping) 20% chance of eventually dying from cancer. That’s if he stays on Earth. If he travels to Mars, the risk goes up.

The question is, how much?

“We’re not sure,” says Cucinotta. According to a 2001 study of people exposed to large doses of radiation (e.g., Hiroshima atomic bomb survivors and, ironically, cancer patients who have undergone radiation therapy), the added risk of a 1000-day Mars mission lies somewhere between 1% and 19%. “The most likely answer is 3.4%,” says Cucinotta, “but the error bars are wide.”

The odds are even worse for women, he adds. “Because of breasts and ovaries, the risk to female astronauts is nearly double the risk to males.”

Researchers who did the study assumed the Mars-ship would be built “mostly of aluminum, like an old Apollo command module,” says Cucinotta. The spaceship’s skin would absorb about half the radiation hitting it.

“If the extra risk is only a few percent... we’re OK. We could build a spaceship using aluminum and head for Mars.” (Aluminum is a favorite material for spaceship construction because it’s lightweight, strong, and familiar to engineers from long decades of use in the aerospace industry.)

“But if it’s 19%... our 40-something astronaut would face a 20% + 19% = 39% chance of developing life-ending cancer after he returns to Earth. That’s not acceptable.”

The error bars are large, says Cucinotta, for good reason. Space radiation is a unique mix of gamma-rays, high-energy protons and cosmic rays. Atomic bomb blasts and cancer treatments, the basis of many studies, are no substitute for the “real thing.”

The greatest threat to astronauts en route to Mars is galactic cosmic rays, or “GCRs” for short. These are particles accelerated to almost light speed by distant supernova explosions. The most dangerous GCRs are heavy ionized nuclei such as Fe+26. “They’re much more energetic (millions of MeV) than typical protons accelerated by solar flares (tens to hundreds of MeV),” notes Cucinotta. GCRs barrel through the skin of spaceships and people like tiny cannon balls, breaking the strands of DNA molecules, damaging genes and killing cells.

Astronauts have rarely experienced a full dose of these deep space GCRs. Consider the International Space Station (ISS): it orbits only 400 km above Earth’s surface. The body of our planet, looming large, intercepts about one-third of GCRs before they reach the ISS. Another third is deflected by Earth’s magnetic field. Space shuttle astronauts enjoy similar reductions.

Apollo astronauts traveling to the moon absorbed higher doses, about 3 times the ISS level, but only for a few days during the Earth-moon cruise. GCRs may have damaged their eyes, notes Cucinotta. On the way to the moon, Apollo crews reported seeing cosmic ray flashes in their retinas, and now, many years later, some of them have developed cataracts. Otherwise they don’t seem to have suffered much. “A few days ‘out there’ is probably safe,” concludes Cucinotta.
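
Putting the quoted fractions together gives a rough relative-dose ladder. This is back-of-envelope arithmetic using only the numbers in this article, not real dosimetry.

```python
# Back-of-envelope relative GCR dose rates, using only the fractions quoted
# above (deep space normalized to 1.0; real dosimetry is far more involved).
deep_space = 1.0
iss = deep_space * (1 - 1/3 - 1/3)   # Earth's body blocks ~1/3,
                                     # the magnetic field deflects ~1/3
apollo_cruise = 3 * iss              # "about 3 times the ISS level"
for label, dose in [("deep space (Mars cruise)", deep_space),
                    ("Apollo Earth-moon cruise", apollo_cruise),
                    ("ISS / shuttle orbit", iss)]:
    print(f"{label:>26}: {dose:.2f}")
```

Notably, tripling the ISS level roughly recovers the full deep-space rate, consistent with Apollo crews having briefly sampled an essentially unshielded environment.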

But astronauts traveling to Mars will be “out there” for a year or more. “We can’t yet estimate, reliably, what cosmic rays will do to us when we’re exposed for so long,” he says.

Finding out is the mission of NASA’s new Space Radiation Laboratory (NSRL), located at the US Department of Energy’s Brookhaven National Laboratory in New York. It opened in October 2003. “At the NSRL we have particle accelerators that can simulate cosmic rays,” explains Cucinotta. Researchers expose mammalian cells and tissues to the particle beams, and then scrutinize the damage. “The goal is to reduce the uncertainty in our risk estimates to only a few percent by the year 2015.”

Once the risks are known, NASA can decide what kind of spaceship to build. It’s possible that ordinary building materials like aluminum are good enough. If not, “we’ve already identified some alternatives,” he says.

How about a spaceship made of plastic?

“Plastics are rich in hydrogen–an element that does a good job absorbing cosmic rays,” explains Cucinotta. For instance, polyethylene, the same material garbage bags are made of, absorbs 20% more cosmic rays than aluminum. A form of reinforced polyethylene developed at the Marshall Space Flight Center is 10 times stronger than aluminum, and lighter, too. This could become a material of choice for spaceship building, if it can be made cheaply enough. “Even if we don’t build the whole spacecraft from plastic,” notes Cucinotta, “we could still use it to shield key areas like crew quarters.” Indeed, this is already done onboard the ISS.

If plastic isn’t good enough then pure hydrogen might be required. Pound for pound, liquid hydrogen blocks cosmic rays 2.5 times better than aluminum does. Some advanced spacecraft designs call for big tanks of liquid hydrogen fuel, so “we could protect the crew from radiation by wrapping the fuel tank around their living space,” speculates Cucinotta.
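
The shielding comparison quoted above reduces to a simple per-unit-mass ranking; the factors are the article’s, while the normalization to aluminum is ours.

```python
# The article's shielding figures, normalized per unit mass to aluminum
# (the 1.0 baseline).
materials = {
    "aluminum": 1.0,
    "polyethylene": 1.2,       # "absorbs 20% more cosmic rays than aluminum"
    "liquid hydrogen": 2.5,    # "blocks cosmic rays 2.5 times better"
}
for name, factor in sorted(materials.items(), key=lambda kv: -kv[1]):
    print(f"{name:>15}: {factor:.1f}x aluminum, pound for pound")
```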

Can people go to Mars? Cucinotta believes so. But first, “we’ve got to figure out how much radiation our bodies can handle and what kind of spaceship we need to build.” In labs around the country, the work has already begun.

Original Source: NASA Science Story

Interstellar Cloud of Gas is a Natural Lens

Image credit: Chandra
Imagine making a natural telescope more powerful than any other telescope currently operating. Then imagine using it to view close to the edge of a black hole, where a jet of super-hot charged particles forms and is spat millions of light-years into space. The task would seem to take one to the edge of no return: a violent spot four billion light-years from Earth, a quasar named PKS 1257-326. Its faint twinkle in the sky earns it the catchier name of ‘blazar’, meaning a quasar that varies dramatically in brightness, and it may mask an even more mysterious inner black hole of enormous gravitational power.

A telescope able to peer into the mouth of the blazar would have to be gigantic, about a million kilometers across. But just such a natural lens has been found by a team of Australian and European astronomers, and its lens is, remarkably, a cloud of gas. The idea of a vast, natural telescope seems too elegant to avoid peering into.

The technique, dubbed ‘Earth-Orbit Synthesis’, was first outlined by Dr Jean-Pierre Macquart of the University of Groningen in The Netherlands and CSIRO’s Dr David Jauncey in a paper published in 2002. The new technique promises researchers the ability to resolve details about 10 microarcseconds across – equivalent to seeing a sugar cube on the Moon, from Earth.

“That’s a hundred times finer detail than we can see with any other current technique in astronomy,” says Dr. Hayley Bignall, who recently completed her PhD at the University of Adelaide and is now at JIVE, the Joint Institute for Very Long Baseline Interferometry in Europe. “It’s ten thousand times better than the Hubble Space Telescope can do. And it’s as powerful as any proposed future space-based optical and X-ray telescopes.”

Bignall made the observations with the CSIRO Australia Telescope Compact Array radio telescope in eastern Australia. A microarcsecond is a measure of angular size, or how big an object looks on the sky: about a third of a billionth of one degree.
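
The sugar-cube comparison is easy to check with small-angle arithmetic (sizes assumed: a 1-centimeter cube at the Moon’s mean distance).

```python
import math

# Checking the sugar-cube comparison: the angular size, in microarcseconds,
# of a ~1 cm object at the Moon's mean distance (both values assumed).
SIZE_M = 0.01            # a sugar cube is roughly 1 cm across
MOON_DIST_M = 3.84e8     # mean Earth-Moon distance in meters
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

theta_uas = SIZE_M / MOON_DIST_M * RAD_TO_ARCSEC * 1e6
print(f"~{theta_uas:.0f} microarcseconds")  # a few uas, the same scale as
                                            # the technique's ~10 uas limit
```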

How does this largest of telescopes work? Using the clumpiness inside a cloud of gas is not entirely unfamiliar to night-watchers. Just as atmospheric turbulence makes the stars twinkle, our own galaxy has a similar invisible atmosphere of charged particles that fills the voids between stars. Any clumping of this gas can naturally form a lens, just as the density change from air to glass bent and focused the light in the first telescope Galileo pointed towards the stars. The effect is also called scintillation, and the cloud acts like a lens.

Seeing better than anyone else may be remarkable, but how to decide where to look first? The team is particularly interested in using ‘Earth-Orbit Synthesis’ to peer close to black holes in quasars, which are the super-bright cores of distant galaxies. These quasars subtend such small angles on the sky as to be mere points of light or radio emission. At radio wavelengths, some quasars are small enough to twinkle in our Galaxy’s atmosphere of charged particles, called the ionized interstellar medium. Quasars twinkle or vary much more slowly than the twinkling one might associate with visible stars, so observers have to be patient to view them, even with the help of the most powerful telescopes. Any change in less than a day is considered fast. The fastest scintillators have signals that double or treble in strength in less than an hour.

In fact, the best observations made so far benefit from the annual motion of the Earth, since the yearly variation gives a complete picture, potentially allowing astronomers to see the violent changes in the mouth of a black-hole jet. That’s one of the team’s goals: “to see to within a third of a light-year of the base of one of these jets,” according to CSIRO’s Dr David Jauncey. “That’s the ‘business end’ where the jet is made.”

It is not possible to “see” into a black hole, because these collapsed stars are so dense that their overpowering gravity doesn’t even allow light to escape. Only the behavior of matter outside a horizon some distance away from a black hole can signal that one even exists. The largest telescope may help the astronomers understand the size of a jet at its base, the pattern of magnetic fields there, and how a jet evolves over time. “We can even look for changes as matter strays near the black hole and is spat out along the jets,” says Dr Macquart.

Astrobiology Magazine had the opportunity to talk with Hayley Bignall about how to make a telescope from gas clouds, and why peering deeper than anyone before may offer insight into remarkable events near black holes.

Astrobiology Magazine (AM): How did you first become interested in using gas clouds as part of a natural focus for resolving very distant objects?

Hayley Bignall (HB): The idea of using interstellar scintillation (ISS), a phenomenon due to radio wave scattering in turbulent, ionized Galactic gas “clouds”, to resolve very distant, compact objects, really represents the convergence of a couple of different lines of research, so I will outline a little of the historical background.

In the 1960s, radio astronomers used another kind of scintillation, interplanetary scintillation, due to scattering of radio waves in the solar wind, to measure sub-arcsecond (1 arcsecond = 1/3600 degrees of arc) angular sizes for radio sources. This was higher resolution than could be achieved by other means at the time. But these studies largely fell by the wayside with the advent of Very Long Baseline Interferometry (VLBI) in the late 1960s, which allowed direct imaging of radio sources with much higher angular resolution – today, VLBI achieves resolution better than a milliarcsecond.

I personally became interested in potential uses of interstellar scintillation through being involved in studies of radio source variability – in particular, variability of “blazars”. Blazar is a catchy name applied to some quasars and BL Lacertae objects – that is, Active Galactic Nuclei (AGN), probably containing supermassive black holes as their “central engines”, which have powerful jets of energetic, radiating particles pointed almost straight at us.

We then see effects of relativistic beaming in the radiation from the jet, including rapid variability in intensity across the whole electromagnetic spectrum, from radio to high-energy gamma rays. Most of the observed variability in these objects could be explained, but there was a problem: some sources showed very rapid, intra-day radio variability. If such short time-scale variability at such long (centimeter) wavelengths were intrinsic to the sources, they would be far too hot to stay around for years, as many were observed to do. Sources that hot should radiate all their energy away very quickly, as X-rays and gamma-rays. On the other hand, it was already known that interstellar scintillation affects radio waves; so the question of whether the very rapid radio variability was in fact ISS, or intrinsic to the sources, was an important one to resolve.

During my PhD research I found, by chance, rapid variability in the quasar (blazar) PKS 1257-326, which is one of the three most rapidly radio variable AGN ever observed. My colleagues and I were able to show conclusively that the rapid radio variability was due to ISS [scintillation]. The case for this particular source added to mounting evidence that intra-day radio variability in general is predominantly due to ISS.

Sources which show ISS must have very small, microarcsecond, angular sizes. Observations of ISS can in turn be used to “map” source structure with microarcsecond resolution. This is much higher resolution than even VLBI can achieve. The technique was outlined in a 2002 paper by two of my colleagues, Dr Jean-Pierre Macquart and Dr David Jauncey.

The quasar PKS 1257-326 proved to be a very nice “guinea pig” with which to demonstrate that the technique really works.

AM: The principles of scintillation are visible to anyone even without a telescope, correct: a star twinkles because it covers a very small angle in the sky (being so far away), but a planet in our solar system doesn’t scintillate visibly? Is this a fair comparison of the principle for estimating distances visually with scintillation?

HB: The comparison with seeing stars twinkle as a result of atmospheric scintillation (due to turbulence and temperature fluctuations in the Earth’s atmosphere) is a fair one; the basic phenomenon is the same. We don’t see planets twinkle because they have much larger angular sizes – the scintillation gets “smeared out” over the planet’s diameter. In this case, of course, it is because the planets are so close to us that they subtend larger angles on the sky than stars.

Scintillation is not really useful for estimating distances to quasars, however: objects that are further away do not always have smaller angular sizes. For example, all pulsars (spinning neutron stars) in our own Galaxy scintillate because they have very tiny angular sizes, much smaller than any quasar, even though quasars are often billions of light-years away. In fact, scintillation has been used to estimate pulsar distances. But for quasars, there are many factors besides distance which affect their apparent angular size, and to complicate matters further, at cosmological distances, the angular size of an object no longer varies as the inverse of distance. Generally the best way of estimating the distance to a quasar is to measure the redshift of its optical spectrum. Then we can convert measured angular scales (e.g. from scintillation or VLBI observations) to linear scales at the redshift of the source

AM: The telescope as described offers a quasar example that is a radio source and observed to vary over an entire year. Are there any natural limits to the types of sources or the length of observation?

HB: There are angular size cut-offs, beyond which the scintillation gets “quenched”. One can picture the radio source brightness distribution as a bunch of independently scintillating “patches” of a given size, so that as the source gets larger, the number of such patches increases, and eventually the scintillation over all the patches averages out so that we cease to observe any variations at all. From previous observations we know that for extragalactic sources, the shape of the radio spectrum has a lot to do with how compact a source is – sources with “flat” or “inverted” radio spectra (i.e. flux density increasing towards shorter wavelengths) are generally the most compact. These also tend to be “blazar”-type sources.
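
The patch-averaging argument can be demonstrated with a toy statistic: summing N independent random intensities reduces the fractional variation roughly as one over the square root of N. This is purely illustrative; real scintillation statistics are more subtle.

```python
import numpy as np

# Toy demo of "quenching": the summed intensity of N independently varying
# "patches" fluctuates fractionally as ~1/sqrt(N), so a larger source
# (more patches) scintillates less. Exponential intensities are an
# arbitrary stand-in distribution.
rng = np.random.default_rng(1)
for n_patches in (1, 4, 16, 64):
    samples = rng.exponential(size=(100_000, n_patches)).sum(axis=1)
    m = samples.std() / samples.mean()
    print(f"N = {n_patches:2d} patches: modulation index ~{m:.2f}")
```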

As far as the length of observation goes, it is necessary to obtain many independent samples of the scintillation pattern. This is because scintillation is a stochastic process, and we need to know some statistics of the process in order to extract useful information. For fast scintillators like PKS 1257-326, we can get an adequate sample of the scintillation pattern from just one, typical 12-hour observing session. Slower scintillators need to be observed over several days to get the same information. However, there are some unknowns to solve for, such as the bulk velocity of the scattering “screen” in the Galactic interstellar medium (ISM). By observing at intervals spaced over a whole year, we can solve for this velocity – and importantly, we also get two-dimensional information on the scintillation pattern and hence the source structure. As the Earth goes around the Sun, we effectively cut through the scintillation pattern at different angles, as the relative Earth/ISM velocity varies over the course of the year. Our research group dubbed this technique “Earth Orbital Synthesis”, as it is analogous to “Earth rotation synthesis”, a standard technique in radio interferometry.
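
A toy calculation shows why the annual cycle carries information. The screen velocity here is an assumed value, and the geometry is flattened to two dimensions.

```python
import numpy as np

# Toy 2-D sketch of "Earth Orbit Synthesis": the scintillation velocity is
# the vector difference between Earth's orbital velocity and the scattering
# screen's velocity. The screen velocity is assumed; real work must also
# account for the source's direction on the sky.
V_EARTH = 30.0                       # Earth's orbital speed, km/s
V_SCREEN = np.array([20.0, 5.0])     # assumed screen velocity, km/s

days = np.arange(0, 361, 60)
phase = 2 * np.pi * days / 365.25
v_earth = V_EARTH * np.column_stack([np.cos(phase), np.sin(phase)])
v_eff = np.linalg.norm(v_earth - V_SCREEN, axis=1)

for day, v in zip(days, v_eff):
    print(f"day {day:3d}: effective velocity ~{v:4.1f} km/s")
# Faster effective velocity means faster scintillation, so the variability
# time-scale itself traces an annual cycle that pins down the screen.
```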

AM: A recent study estimated that there are ten times more stars in the known universe than grains of sand on Earth. Can you describe why jets and black holes are interesting as difficult-to-resolve objects, even using current and future space telescopes like Hubble and Chandra?

HB: The objects we are studying are some of the most energetic phenomena in the universe. AGN can be up to ~10^13 (ten trillion) times more luminous than the Sun. They are unique “laboratories” for high-energy physics. Astrophysicists would like to fully understand the processes involved in forming these tremendously powerful jets close to the central supermassive black hole. Using scintillation to resolve the inner regions of radio jets, we are peering close to the “nozzle” where the jet forms – closer to the action than we can see with any other technique!

AM: In your research paper, you point out that how fast and how strongly the radio signals vary depends on the size and shape of the radio source, the size and structure of the gas clouds, the Earth’s speed and direction as it travels around the Sun, and the speed and direction in which the gas clouds are travelling. Are there built-in assumptions about either the shape of the gas cloud ‘lens’ or the shape of the observed object that is accessible with the technique?

Image caption: The Ring Nebula, although not useful to image through, has the suggestive look of a faraway telescope lens. Lying 2,000 light-years away in the direction of the constellation Lyra, the ring formed in the late stages of the inner star’s life, when it shed a thick and expanding outer gas layer. Credit: NASA Hubble HST

HB: Rather than think of gas clouds, it is perhaps more accurate to picture a phase-changing “screen” of ionized gas, or plasma, which contains a large number of cells of turbulence. The main assumption which goes into the model is that the size scale of the turbulent fluctuations follows a power-law spectrum – this seems to be a reasonable assumption, from what we know about general properties of turbulence. The turbulence could be preferentially elongated in a particular direction, due to magnetic field structure in the plasma, and in principle we can get some information on this from the observed scintillation pattern. We also get some information from the scintillation pattern about the shape of the observed object, so there are no built-in assumptions about that, although at this stage we can only use quite simple models to describe the source structure.

AM: Are fast scintillators a good target for expanding the capabilities of the method?

HB: Fast scintillators are good simply because they don’t require as much observing time as slower scintillators to get the same amount of information. The first three “intra-hour” scintillators have taught us a lot about the scintillation process and about how to do “Earth Orbit Synthesis”.

AM: Are there additional candidates planned for future observations?

HB: My colleagues and I have recently undertaken a large survey, using the Very Large Array in New Mexico, to look for new scintillating radio sources. The first results of this survey, led by Dr Jim Lovell of the CSIRO’s Australia Telescope National Facility (ATNF), were recently published in the Astronomical Journal (October 2003). Out of 700 flat spectrum radio sources observed, we found more than 100 sources which showed significant variability in intensity over a 3-day period. We are undertaking follow-up observations in order to learn more about source structure on ultra-compact, microarcsecond scales. We will compare these results with other source properties such as emission at other wavelengths (optical, X-ray, gamma-ray), and structure on larger spatial scales, such as that seen with VLBI. In this way we hope to learn more about these very compact, high brightness temperature sources, and also, in the process, learn more about properties of the interstellar medium of our own Galaxy.

It seems that the reason for very fast scintillation in some sources is that the plasma “scattering screen” causing the bulk of the scintillation is quite nearby, within 100 light-years of the solar system. These nearby “screens” are apparently quite rare. Our survey found very few fast scintillators, which was somewhat surprising as two of the three fastest known scintillators were discovered serendipitously. We thought that there might be many more such sources!

Original Source: Astrobiology Magazine