Using Gravitational Lensing to Measure Age and Size of Universe

A gravitational lens image of the B1608+656 system. Image courtesy Sherry Suyu of the Argelander Institut für Astronomie in Bonn, Germany.


Handy little tool, this gravitational lensing! Astronomers have used it to measure the shape of stars, look for exoplanets, and measure dark matter in distant galaxies. Now it's being used to measure the size and age of the Universe. Researchers say this new use of gravitational lensing provides a very precise way to measure how rapidly the universe is expanding. The measurement determines a value for the Hubble constant, which indicates the size of the universe, and confirms the age of the Universe as 13.75 billion years, give or take 170 million years. The results also confirm the strength of dark energy, which is responsible for accelerating the expansion of the universe.
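
As a back-of-the-envelope check on how the Hubble constant sets that age scale, the “Hubble time” 1/H0 gives the right order of magnitude; here is a minimal sketch in Python (the H0 value is illustrative, not the paper's measured figure):

```python
# Back-of-the-envelope: the "Hubble time" 1/H0 sets the age scale of the universe.
# The H0 value here is illustrative, not the paper's measured figure.
H0 = 70.0                    # Hubble constant in km/s/Mpc (illustrative)
KM_PER_MPC = 3.0857e19       # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16       # seconds in one billion years

H0_per_sec = H0 / KM_PER_MPC                      # H0 converted to 1/s
hubble_time_gyr = 1.0 / H0_per_sec / SEC_PER_GYR
print(f"Hubble time: {hubble_time_gyr:.1f} Gyr")  # ~14.0 Gyr
```

In a flat universe with dark energy the true age works out close to 1/H0, which is why pinning down the Hubble constant pins down the 13.75-billion-year figure.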

Gravitational lensing occurs when two galaxies happen to be aligned with one another along our line of sight in the sky. The gravitational field of the nearer galaxy distorts the image of the more distant galaxy into multiple arc-shaped images. Sometimes this effect even creates a complete ring, known as an “Einstein Ring.”
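
For the simplest case of a point-mass lens, the angular radius of that ring (the Einstein radius) is a standard textbook result, with M the lens mass and D_l, D_s, D_ls the distances to the lens, to the source, and from lens to source:

```latex
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{ls}}{D_l\,D_s}}
```
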
Researchers at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) used gravitational lensing to measure the distances light traveled from a bright, active galaxy to the earth along different paths. By understanding the time it took to travel along each path and the effective speeds involved, researchers could infer not just how far away the galaxy lies but also the overall scale of the universe and some details of its expansion.

Distinguishing distances in space is difficult. A bright light far away and a dimmer source lying much closer can look like they are at the same distance. A gravitational lens circumvents this problem by providing multiple clues as to the distance light travels. That extra information allows astronomers to determine the size of the universe, often expressed by astrophysicists in terms of a quantity called Hubble’s constant.

“We’ve known for a long time that lensing is capable of making a physical measurement of Hubble’s constant,” KIPAC’s Phil Marshall said. However, gravitational lensing had never before been used in such a precise way. The new measurement is as precise as those from long-established tools such as observations of supernovae and the cosmic microwave background. “Gravitational lensing has come of age as a competitive tool in the astrophysicist’s toolkit,” Marshall said.

When a large nearby object, such as a galaxy, blocks a distant object, such as another galaxy, the light can detour around the blockage. But instead of taking a single path, light can bend around the object along two or even four different routes, thus doubling or quadrupling the amount of information scientists receive. As the brightness of the background galaxy nucleus fluctuates, physicists can measure the ebb and flow of light from the four distinct paths, such as in the B1608+656 system that was the subject of this study. Lead author on the study Sherry Suyu, from the University of Bonn, said, “In our case, there were four copies of the source, which appear as a ring of light around the gravitational lens.”

Though researchers do not know when light left its source, they can still compare arrival times. Marshall likens it to four cars taking four different routes between places on opposite sides of a large city, such as from Stanford University to Lick Observatory, through or around San Jose. And like automobiles facing traffic snarls, light can encounter delays, too.

“The traffic density in a big city is like the mass density in a lens galaxy,” Marshall said. “If you take a longer route, it need not lead to a longer delay time. Sometimes the shorter distance is actually slower.”

The gravitational lens equations account for all the variables such as distance and density, and provide a better idea of when light left the background galaxy and how far it traveled.
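
Schematically, each image’s delay combines a geometric term with a gravitational (Shapiro) term, and its overall scale is set by a ratio of distances that varies inversely with Hubble’s constant. In standard lensing notation (a textbook sketch, not the paper’s full model), with θ the image position, β the true source position, ψ the lens potential, and z_l the lens redshift:

```latex
\Delta t = \frac{1+z_l}{c}\,\frac{D_l D_s}{D_{ls}}
\left[\frac{(\theta-\beta)^2}{2}-\psi(\theta)\right],
\qquad
D_{\Delta t}\equiv(1+z_l)\,\frac{D_l D_s}{D_{ls}}\propto\frac{1}{H_0}
```

Measuring the delays between the images therefore fixes that distance combination, and with it the Hubble constant.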

In the past, this method of distance estimation was plagued by errors, but physicists now believe it is comparable with other measurement methods. With this technique, the researchers have come up with a more accurate lensing-based value for Hubble’s constant, and a better estimation of the uncertainty in that constant. By both reducing and understanding the size of the error in their calculations, they can achieve better estimates of the structure of the lens and the size of the universe.

There are several factors scientists still need to account for in determining distances with lenses. For example, dust in the lens can skew the results. The Hubble Space Telescope has infrared filters useful for eliminating dust effects. The images also contain information about the number of galaxies lying along the line of sight; these contribute to the lensing effect at a level that needs to be taken into account.

Marshall says several groups are working on extending this research, both by finding new systems and further examining known lenses. Researchers are already aware of more than twenty other astronomical systems suitable for analysis with gravitational lensing.

The results of this study were published in the March 1 issue of The Astrophysical Journal. The researchers used data collected by the NASA/ESA Hubble Space Telescope, and showed the improved precision these data provide in combination with results from the Wilkinson Microwave Anisotropy Probe (WMAP).

Source: SLAC

Quintessence

Quintessence is one idea – hypothesis – of what dark energy is (remember that dark energy is the shorthand expression of the apparent acceleration of the expansion of the universe … or the form of mass-energy which causes this observed acceleration, in cosmological models built with Einstein’s theory of general relativity).

The word quintessence means fifth essence, and is kinda cute … remember Earth, Water, Fire, and Air, the ‘four essences’ of the Ancient Greeks? Well, in modern cosmology, there are also four essences: normal matter, radiation (photons), cold dark matter, and neutrinos (which are hot dark matter!).

Quintessence covers a range of hypotheses (or models); the main difference between quintessence as a (possible) explanation for dark energy and the cosmological constant Λ (which harks back to Einstein and the early years of the 20th century) is that quintessence varies with time (albeit slooowly), and can also vary with location (space). One version of quintessence is phantom energy, in which the energy density increases with time, and leads to a Big Rip end of the universe.
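
A compact way to state that difference is through the equation-of-state parameter w, the ratio of pressure to energy density of the dark-energy component:

```latex
w \equiv \frac{p}{\rho}:\qquad
w = -1 \;\;(\text{cosmological constant }\Lambda), \qquad
w(t) > -1 \;\;(\text{quintessence}), \qquad
w < -1 \;\;(\text{phantom energy})
```
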

Quintessence, as a scalar field, is not the least bit unusual in physics (the Newtonian gravitational potential field is one example of a real scalar field; the Higgs field of the Standard Model of particle physics is an example of a complex scalar field); however, it has some difficulties in common with the cosmological constant (in a nutshell: how can it be so small?).
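
For a homogeneous scalar field φ rolling in a potential V(φ), the textbook expressions for its energy density and pressure show why a slowly rolling field (kinetic energy much smaller than V) mimics the cosmological constant:

```latex
\rho_\phi = \tfrac{1}{2}\dot\phi^2 + V(\phi),\qquad
p_\phi = \tfrac{1}{2}\dot\phi^2 - V(\phi),\qquad
w_\phi = \frac{p_\phi}{\rho_\phi}\approx -1
\;\;\text{when}\;\;\dot\phi^2 \ll V(\phi)
```
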

Can quintessence be observed; or, rather, can quintessence be distinguished from a cosmological constant? In astronomy, yes … by finding a way to observe (and measure) the acceleration of the universe at widely different times (quintessence and Λ predict different results). Another way might be to observe variations in the fundamental constants (e.g. the fine structure constant) or violations of Einstein’s equivalence principle.

One project seeking to measure the acceleration of the universe more accurately was ESSENCE (“Equation of State: SupErNovae trace Cosmic Expansion”).

In 1999, CERN Courier published a nice summary of cosmology as it was understood then, a year after the discovery of dark energy: The quintessence of cosmology (it’s well worth a read, though a lot has happened in the past decade).

Universe Today articles? Yep! For example: Will the Universe Expand Forever?, More Evidence for Dark Energy, and Hubble Helps Measure the Pace of Dark Energy.

Astronomy Cast episodes relevant to quintessence include What is the universe expanding into?, and A Universe of Dark Energy.

Source: NASA

New Search for Dark Energy Goes Back in Time

A previous optical image of one of the approximately 200 quasars captured in the Baryon Oscillation Spectroscopic Survey (BOSS) “first light” exposure is shown at top, with the BOSS spectrum of the object at bottom. The spectrum allows astronomers to determine the object’s redshift. With millions of such spectra, BOSS will measure the geometry of the Universe. Credit: David Hogg, Vaishali Bhardwaj, and Nic Ross of SDSS-III

Baryon acoustic oscillation (BAO) sounds like it could be technobabble from a Star Trek episode. But BAO is real, and astronomers are searching for these density fluctuations to do what seems like science fiction: look back in time to find clues about dark energy. The Baryon Oscillation Spectroscopic Survey (BOSS), a part of the Sloan Digital Sky Survey III (SDSS-III), took its “first light” of astronomical data last month, and will map the expansion history of the Universe.

“Baryon oscillation is a fast-maturing method for measuring dark energy in a way that’s complementary to the proven techniques of supernova cosmology,” said David Schlegel from the Lawrence Berkeley National Laboratory (Berkeley Lab), the Principal Investigator of BOSS. “The data from BOSS will be some of the best ever obtained on the large-scale structure of the Universe.”

BOSS uses the same telescope as the original Sloan Digital Sky Survey — the 2.5-meter telescope at Apache Point Observatory in New Mexico — but equipped with new, specially built spectrographs to measure the spectra.

Senior Operations Engineer Dan Long loads the first cartridge of the night into the Sloan Digital Sky Survey telescope. The cartridge holds a “plug-plate” at the top which then holds a thousand optical fibers shown in red and blue. These cartridges are locked into the base of the telescope and are changed many times during a night. Photo credit: D. Long

Baryon oscillations began when pressure waves traveled through the early universe. The density variations those waves created left their mark as the Universe evolved, in the periodic clustering of visible matter in galaxies, quasars, and intergalactic gas, as well as in the clumping of invisible dark matter.

Comparing these scales at different eras makes it possible to trace the details of how the Universe has expanded throughout its history – information that can be used to distinguish among competing theories of dark energy.

“Like sound waves passing through air, the waves push some of the matter closer together as they travel,” said Nikhil Padmanabhan, a BOSS researcher who recently moved from Berkeley Lab to Yale University. “In the early universe, these waves were moving at half the speed of light, but when the universe was only a few hundred thousand years old, the universe cooled enough to halt the waves, leaving a signature 500 million light-years in length.”

“We can see these frozen waves in the distribution of galaxies today,” said Daniel Eisenstein of the University of Arizona, the Director of the SDSS-III. “By measuring the length of the baryon oscillations, we can determine how dark energy has affected the expansion history of the universe. That in turn helps us figure out what dark energy could be.”
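
The standard-ruler logic behind that measurement can be sketched in a few lines of Python: a feature of known comoving size subtends a smaller angle the farther away it is (the numbers are illustrative, and the small-angle approximation is assumed):

```python
import math

# Standard-ruler sketch: a BAO feature of known comoving size r_s subtends
# an angle that shrinks with distance. Numbers are illustrative.
r_s = 150.0  # comoving sound horizon in Mpc (~500 million light-years)

def comoving_distance(theta_deg: float) -> float:
    """Comoving distance (Mpc) at which r_s subtends theta_deg degrees."""
    return r_s / math.radians(theta_deg)

# A BAO feature observed to subtend about 2 degrees on the sky:
print(f"{comoving_distance(2.0):.0f} Mpc")  # ~4300 Mpc
```

Repeating that measurement at many redshifts traces how the expansion has stretched the ruler’s apparent size over cosmic time.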

“Studying baryon oscillations is an exciting method for measuring dark energy in a way that’s complementary to techniques in supernova cosmology,” said Kyle Dawson of the University of Utah, who is leading the commissioning of BOSS. “BOSS’s galaxy measurements will be a revolutionary dataset that will provide rich insights into the universe,” added Martin White of Berkeley Lab, BOSS’s survey scientist.

On Sept. 14-15, 2009, astronomers used BOSS to measure the spectra of a thousand galaxies and quasars. The goal of BOSS is to measure 1.4 million luminous red galaxies at redshifts up to 0.7 (when the Universe was roughly seven billion years old) and 160,000 quasars at redshifts between 2.0 and 3.0 (when the Universe was only about three billion years old). BOSS will also measure variations in the density of hydrogen gas between the galaxies. The observation program will take five years.
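
The ages quoted for those redshifts follow from converting redshift to cosmic time in a cosmological model; here is a minimal sketch for a flat ΛCDM universe (the parameter values are illustrative, not BOSS’s adopted cosmology):

```python
import math

# Cosmic age at redshift z in a flat LambdaCDM model (illustrative parameters).
H0 = 70.0                     # Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7   # matter and dark-energy density fractions
H0_GYR = H0 / 3.0857e19 * 3.156e16  # H0 converted to 1/Gyr

def age_at(z: float, steps: int = 100_000) -> float:
    """Age of the universe in Gyr at redshift z, via t = integral da/(a*H(a))."""
    a_max = 1.0 / (1.0 + z)
    da = a_max / steps
    t = 0.0
    for i in range(steps):
        a = (i + 0.5) * da  # midpoint rule
        H = H0_GYR * math.sqrt(OMEGA_M / a**3 + OMEGA_L)
        t += da / (a * H)
    return t

print(f"z=0.7: {age_at(0.7):.1f} Gyr")  # ~7.2 Gyr ("roughly seven billion")
print(f"z=2.5: {age_at(2.5):.1f} Gyr")  # ~2.6 Gyr ("about three billion")
```
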

Source: Sloan Digital Sky Survey

Variability in Type 1A Supernovae Has Implications for Studying Dark Energy

A Hubble Space Telescope image of Supernova 1994D (SN 1994D) in galaxy NGC 4526 (SN 1994D is the bright spot at the lower left). Image credit: HST


The discovery of dark energy, a mysterious force that is accelerating the expansion of the universe, was based on observations of type 1a supernovae, and these stellar explosions have long been used as “standard candles” for measuring the expansion. But not all type 1a supernovae are created equal. A new study reveals sources of variability in these supernovae, and to accurately probe the nature of dark energy and determine if it is constant or variable over time, scientists will have to find a way to measure cosmic distances with much greater precision than they have in the past.

“As we begin the next generation of cosmology experiments, we will want to use type 1a supernovae as very sensitive measures of distance,” said Daniel Kasen, lead author of a study published in Nature this week. “We know they are not all the same brightness, and we have ways of correcting for that, but we need to know if there are systematic differences that would bias the distance measurements. So this study explored what causes those differences in brightness.”

Kasen and his coauthors – Fritz Röpke of the Max Planck Institute for Astrophysics in Garching, Germany, and Stan Woosley, professor of astronomy and astrophysics at UC Santa Cruz – used supercomputers to run dozens of simulations of type 1a supernovae. The results indicate that much of the diversity observed in these supernovae is due to the chaotic nature of the processes involved and the resulting asymmetry of the explosions.

For the most part, this variability would not produce systematic errors in measurement studies as long as researchers use large numbers of observations and apply the standard corrections, Kasen said. The study did find a small but potentially worrisome effect that could result from systematic differences in the chemical compositions of stars at different times in the history of the universe. But researchers can use the computer models to further characterize this effect and develop corrections for it.

A type 1a supernova occurs when a white dwarf star acquires additional mass by siphoning matter away from a companion star. When it reaches a critical mass – 1.4 times the mass of the Sun, packed into an object the size of the Earth – the heat and pressure in the center of the star spark a runaway nuclear fusion reaction, and the white dwarf explodes. Since the initial conditions are about the same in all cases, these supernovae tend to have the same luminosity, and their “light curves” (how the luminosity changes over time) are predictable.

Some are intrinsically brighter than others, but these flare and fade more slowly, and this correlation between the brightness and the width of the light curve allows astronomers to apply a correction to standardize their observations. So astronomers can measure the light curve of a type 1a supernova, calculate its intrinsic brightness, and then determine how far away it is, since the apparent brightness diminishes with distance (just as a candle appears dimmer at a distance than it does up close).
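
That last step is ordinary inverse-square arithmetic, usually written as the distance-modulus relation m − M = 5 log10(d / 10 pc). A small sketch (the apparent magnitude is made up for illustration; the roughly −19.3 standardized peak is typical for type 1a supernovae):

```python
# Distance from the distance-modulus relation m - M = 5*log10(d_pc) - 5.
def distance_mpc(m: float, M: float) -> float:
    """Luminosity distance in Mpc from apparent (m) and absolute (M) magnitude."""
    d_pc = 10 ** ((m - M + 5.0) / 5.0)
    return d_pc / 1.0e6

# A standardized type 1a peak (M ~ -19.3) observed at apparent magnitude 24:
print(f"{distance_mpc(24.0, -19.3):.0f} Mpc")  # ~4600 Mpc
```
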

The computer models used to simulate these supernovae in the new study are based on current theoretical understanding of how and where the ignition process begins inside the white dwarf and where it makes the transition from slow-burning combustion to explosive detonation.

The simulations showed that the asymmetry of the explosions is a key factor determining the brightness of type 1a supernovae. “The reason these supernovae are not all the same brightness is closely tied to this breaking of spherical symmetry,” Kasen said.

The dominant source of variability is the synthesis of new elements during the explosions, which is sensitive to differences in the geometry of the first sparks that ignite a thermonuclear runaway in the simmering core of the white dwarf. Nickel-56 is especially important, because the radioactive decay of this unstable isotope creates the afterglow that astronomers are able to observe for months or even years after the explosion.

“The decay of nickel-56 is what powers the light curve. The explosion is over in a matter of seconds, so what we see is the result of how the nickel heats the debris and how the debris radiates light,” Kasen said.
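
The two half-lives in that chain, nickel-56 to cobalt-56 in about 6.1 days and cobalt-56 to iron-56 in about 77 days, set the light curve’s timescale. Here is a sketch of the surviving fractions using the standard two-step decay solution (pure decay only, ignoring how the debris traps and re-radiates the energy):

```python
import math

# Decay chain powering a type 1a light curve:
# Ni-56 -> Co-56 (half-life ~6.1 days) -> Fe-56 (half-life ~77 days).
T_NI, T_CO = 6.1, 77.2                               # half-lives in days
L_NI, L_CO = math.log(2) / T_NI, math.log(2) / T_CO  # decay constants, 1/day

def ni_fraction(t: float) -> float:
    """Fraction of the original Ni-56 remaining after t days."""
    return math.exp(-L_NI * t)

def co_fraction(t: float) -> float:
    """Co-56 nuclei per original Ni-56 nucleus (two-step decay solution)."""
    return L_NI / (L_CO - L_NI) * (math.exp(-L_NI * t) - math.exp(-L_CO * t))

for day in (10, 50, 100):
    print(f"day {day:3d}: Ni-56 {ni_fraction(day):.3f}, Co-56 {co_fraction(day):.3f}")
```
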

Kasen developed the computer code to simulate this radiative transfer process, using output from the simulated explosions to produce visualizations that can be compared directly to astronomical observations of supernovae.

The good news is that the variability seen in the computer models agrees with observations of type 1a supernovae. “Most importantly, the width and peak luminosity of the light curve are correlated in a way that agrees with what observers have found. So the models are consistent with the observations on which the discovery of dark energy was based,” Woosley said.

Another source of variability is that these asymmetric explosions look different when viewed at different angles. This can account for differences in brightness of as much as 20 percent, Kasen said, but the effect is random and creates scatter in the measurements that can be statistically reduced by observing large numbers of supernovae.
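
That statistical reduction is the familiar one-over-square-root-of-N behavior of an average; a toy illustration (the 20 percent single-object scatter is from the study, the sample sizes are made up):

```python
import math

# Random viewing-angle scatter averages down as 1/sqrt(N) over many supernovae.
sigma_one = 0.20  # ~20% brightness scatter for a single supernova
for n in (1, 25, 100, 400):
    print(f"N = {n:3d}: scatter on the mean ~ {sigma_one / math.sqrt(n):.1%}")
```
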

The potential for systematic bias comes primarily from variation in the initial chemical composition of the white dwarf star. Heavier elements are synthesized during supernova explosions, and debris from those explosions is incorporated into new stars. As a result, stars formed recently are likely to contain more heavy elements (higher “metallicity,” in astronomers’ terminology) than stars formed in the distant past.

“That’s the kind of thing we expect to evolve over time, so if you look at distant stars corresponding to much earlier times in the history of the universe, they would tend to have lower metallicity,” Kasen said. “When we calculated the effect of this in our models, we found that the resulting errors in distance measurements would be on the order of 2 percent or less.”

Further studies using computer simulations will enable researchers to characterize the effects of such variations in more detail and limit their impact on future dark-energy experiments, which might require a level of precision that would make errors of 2 percent unacceptable.

Source: EurekAlert