Polar Telescope Casts New Light On Dark Energy And Neutrino Mass

The 10-meter South Pole Telescope in Antarctica at the Amundsen-Scott Station. (Daniel Luong-Van, National Science Foundation)

Located at the southernmost point on Earth, the 280-ton, 10-meter-wide South Pole Telescope has helped astronomers unravel the nature of dark energy and zero in on the actual mass of neutrinos — elusive subatomic particles that pervade the Universe and, until very recently, were thought to be entirely without measurable mass.

The NSF-funded South Pole Telescope (SPT) is specifically designed to study the secrets of dark energy, the force that purportedly drives the incessant (and apparently still accelerating) expansion of the Universe. Its millimeter-wave observation abilities allow scientists to study the Cosmic Microwave Background (CMB) which pervades the night sky with the 14-billion-year-old echo of the Big Bang.

Overlaid upon the imprint of the CMB are the silhouettes of distant galaxy clusters — some of the most massive structures to form within the Universe. By locating these clusters and mapping their movements with the SPT, researchers can see how dark energy — and neutrinos — interact with them.

“Neutrinos are amongst the most abundant particles in the universe,” said Bradford Benson, an experimental cosmologist at the University of Chicago’s Kavli Institute for Cosmological Physics. “About one trillion neutrinos pass through us each second, though you would hardly notice them because they rarely interact with ‘normal’ matter.”

If neutrinos were particularly massive, they would have an effect on the large-scale galaxy clusters observed with the SPT. If they had no mass, there would be no effect.

The SPT collaboration team’s results, however, fall somewhere in between.

Even though only 100 of the 500 clusters identified so far have been surveyed, the team has been able to place a reasonably reliable preliminary upper limit on the mass of neutrinos — again, particles that had once been assumed to have no mass.

Previous experiments have also assigned a lower limit to the mass of neutrinos, narrowing the anticipated mass of the subatomic particles to between 0.05 and 0.28 eV (electron volts). Once the SPT survey is completed, the team expects an even more confident measurement of the particles’ masses.

“With the full SPT data set we will be able to place extremely tight constraints on dark energy and possibly determine the mass of the neutrinos,” said Benson.

“We should be very close to the level of accuracy needed to detect the neutrino masses,” he noted later in an email to Universe Today.

The South Pole Telescope's unique position allows it to watch the night sky for months on end. (NSF)

Such precise measurements would not have been possible without the South Pole Telescope which, thanks to its unique location, can observe a dark sky for very long periods of time. Antarctica also offers the SPT a stable atmosphere, as well as very low levels of water vapor that might otherwise absorb faint millimeter-wavelength signals.

“The South Pole Telescope has proven to be a crown jewel of astrophysical research carried out by NSF in the Antarctic,” said Vladimir Papitashvili, Antarctic Astrophysics and Geospace Sciences program director at NSF’s Office of Polar Programs. “It has produced about two dozen peer-reviewed science publications since the telescope received its ‘first light’ on Feb. 17, 2007. SPT is a very focused, well-managed and amazing project.”

The team’s findings were presented by Bradford Benson at the American Physical Society meeting in Atlanta on April 1.

Read more on the NSF press release here.

GALEX Mission Comes to an End

The GALEX spacecraft before its launch in 2003. Credit: JPL

A mission which helped map the ultraviolet sky and worked to confirm the nature of dark energy is coming to an end. Galaxy Evolution Explorer, or GALEX, was placed in standby mode today after nearly nine years of service and will be decommissioned later this year. With data from the mission, scientists were able to catalog millions of galaxies spanning 10 billion years of cosmic time.

The Galaxy Evolution Explorer launched in April of 2003 on board a Pegasus XL rocket. It completed its prime mission in the fall of 2007, but the mission was extended to continue its census of stars and galaxies.

The variable star Mira. Image credit: Galex

Other mission highlights include the discovery of a gigantic comet-like tail behind a speeding star, finding rings of new stars around old galaxies, exploring “teenager” galaxies, which help to explain how galaxies evolve, and catching a black hole devouring a star.
The mission was part of NASA’s Explorers program and was built and managed by the Jet Propulsion Laboratory. Scientists from around the world participated in GALEX studies.

For a complete list of discoveries by GALEX, see this JPL webpage.

Supernova Primo – Out To Far Frontiers

The top image shows part of the Hubble Ultra Deep Field, the region where astronomers were looking for a supernova blast. The white box pinpoints the area where the supernova is later seen. The image combines observations taken in visible and near-infrared light with the Advanced Camera for Surveys and the Wide Field Camera 3. The image at bottom left, taken by the Wide Field Camera 3, is a close-up of the field without the supernova. A new bright object, identified as the supernova, appears in the Wide Field Camera 3 image at bottom right. Credit: NASA, ESA, A. Riess (Space Telescope Science Institute and The Johns Hopkins University), and S. Rodney (The Johns Hopkins University)

Its nickname is SN Primo and it’s the farthest Type Ia supernova to have its distance spectroscopically confirmed. When the progenitor star exploded some 9 billion years ago, Primo sent its brilliant beacon of light across time and space to be captured by the Hubble Space Telescope. It’s all part and parcel of a three-year project dealing specifically with Type Ia supernovae. By splitting its light into constituent colors, researchers can verify its distance by redshift and help astronomers better understand not only the expanding Universe, but the constraints of dark energy.

“For decades, astronomers have harnessed the power of Hubble to unravel the mysteries of the Universe,” said John Grunsfeld, associate administrator for NASA’s Science Mission Directorate in Washington. “This new observation builds upon the revolutionary research using Hubble that won astronomers the 2011 Nobel Prize in Physics, while bringing us a step closer to understanding the nature of dark energy which drives the cosmic acceleration.”

Type Ia supernovae are theorized to originate from white dwarf stars that have collected an excess of material from their companions and exploded. Because their peak brightness is so consistent, they can be used to measure great distances with acceptable accuracy. Enter the CANDELS+CLASH Supernova Project… a kind of census that utilizes the sharpness and versatility of Hubble’s Wide Field Camera 3 (WFC3) to help astronomers search for supernovae in near-infrared light and verify their distances with spectroscopy. CANDELS is the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey and CLASH is the Cluster Lensing and Supernova Survey with Hubble.

“In our search for supernovae, we had gone as far as we could go in optical light,” said Adam Riess, the project’s lead investigator, at the Space Telescope Science Institute and The Johns Hopkins University in Baltimore, Md. “But it’s only the beginning of what we can do in infrared light. This discovery demonstrates that we can use the Wide Field Camera 3 to search for supernovae in the distant Universe.”

However, discovering a supernova like Primo doesn’t just happen overnight. It took the research team several months of work and a huge number of near-infrared images to locate the faint signature. After capturing the elusive target in October 2010, it was time to employ the WFC3’s spectrometer to validate SN Primo’s distance and analyze the spectra for confirmation of a Type Ia supernova event. Once verified, the team continued to image SN Primo for the next eight months – collecting data as it faded away. By engaging Hubble in this type of census, astronomers hope to further their understanding of how such events are created. If they should discover that Type Ia supernovae don’t always appear the same, it may lead to a way of categorizing those changes and aid in measuring dark energy. Riess and two other astronomers shared the 2011 Nobel Prize in Physics for discovering dark energy 13 years ago, using Type Ia supernovae to plot the Universe’s expansion rate.

“If we look into the early Universe and measure a drop in the number of supernovae, then it could be that it takes a long time to make a Type Ia supernova,” said team member Steve Rodney of The Johns Hopkins University. “Like corn kernels in a pan waiting for the oil to heat up, the stars haven’t had enough time at that epoch to evolve to the point of explosion. However, if supernovae form very quickly, like microwave popcorn, then they will be immediately visible, and we’ll find many of them, even when the Universe was very young. Each supernova is unique, so it’s possible that there are multiple ways to make a supernova.”

Original Story Source: Hubble Site News Release.

Unlocking Cosmology With Type 1a Supernovae

New research shows that some old stars known as white dwarfs might be held up by their rapid spins, and when they slow down, they explode as Type Ia supernovae. Thousands of these "time bombs" could be scattered throughout our Galaxy. In this artist's conception, a supernova explosion is about to obliterate an orbiting Saturn-like planet. Credit: David A. Aguilar (CfA)

Let’s face it, cosmologists catch a lot of flak. It’s easy to see why. These are people who routinely publish papers claiming to ever more finely constrain the size of the visible Universe, the rate of its breakneck expansion, and the distance to galaxies that lie closer and closer to the edges of both time and space. Many skeptics scoff at scientists who seem to draw such grand conclusions without being able to directly measure the unbelievable cosmic distances involved. Well, it turns out cosmologists are a creative bunch. Enter our star (ha, ha): the Type 1a supernova. These stellar fireballs are one of the main tools astronomers use to make such fantastic discoveries about our Universe. But how exactly do they do it?

First, let’s talk physics. Type 1a supernovae result from a mismatched marriage gone wrong. When a red giant and a white dwarf (or, less commonly, two white dwarfs) become trapped in a gravitational standoff, the denser dwarf star begins to accrete material from its bloated companion. Eventually the white dwarf reaches a critical mass (about 1.4 times that of our own Sun) and the degeneracy pressure exerted by its core can no longer support its weight. A runaway nuclear reaction occurs, resulting in a cataclysmic explosion so large it can be seen billions of light years away. Since Type 1a supernovae always result from the detonation of a white dwarf, and since the white dwarf always becomes unstable at very nearly the same mass, astronomers can work out the intrinsic luminosity of such an event. And they have. This is great news, because it means that Type 1a supernovae can be used as so-called standard candles with which to probe distances in the Universe. After all, if you know how bright something is and you know how bright it appears from where you are, you can easily figure out how far away it must be.
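
That last "figure out how far away it must be" step is just the inverse-square law, usually written as the distance modulus. A minimal Python sketch (the magnitudes here are illustrative values, not taken from the article):

```python
def luminosity_distance_pc(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag) / 5 + 1)

# Type 1a supernovae peak near absolute magnitude M ~ -19.3.
# Suppose one is observed at apparent magnitude m = 24.0:
d_pc = luminosity_distance_pc(24.0, -19.3)
print(f"{d_pc / 1e9:.2f} Gpc")  # → 4.57 Gpc
```

The fainter the supernova appears relative to its fixed intrinsic brightness, the farther away it must be; that is the whole trick.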

A Type Ia supernova occurs when a white dwarf accretes material from a companion star until it exceeds the Chandrasekhar limit and explodes. By studying these exploding stars, astronomers can measure dark energy and the expansion of the universe. CfA scientists have found a way to correct for small variations in the appearance of these supernovae, so that they become even better standard candles. The key is to sort the supernovae based on their color. Credit: NASA/CXC/M. Weiss

Now here’s where cosmology comes in. Photons naturally lose energy as they travel across the expanding Universe, so the light astronomers observe coming from Type 1a supernovae will always be redshifted. The magnitude of that redshift depends on the amount of dark energy that is driving the Universe’s expansion. It also means that the apparent brightness of a supernova (that is, how bright it looks from Earth) can be monitored to determine how quickly it is receding from us. Observations of the night sky will always be a function of a specific cosmology; but because their distances can be so readily calculated, Type 1a supernovae actually allow astronomers to draw a physical map of the expansion of the Universe.

Spotting a Type 1a supernova in its early, explosive throes is a rare event; after all, the Universe is a pretty big place. But when it does happen, it offers observers an unparalleled opportunity to dissect the chaos that leads to such a massive explosion. Sometimes astronomers are even lucky enough to catch one right in our cosmic backyard, a feat that occurred last August when Caltech’s Palomar Transient Factory (PTF) detected a Type 1a supernova in M101, a galaxy just 25 million light years away. And it isn’t just professionals who get to have all the fun! Amateur and career astronomers alike were able to use this supernova (the romantically named PTF11kly) to probe the inner workings of these precious standard candles. Want to learn more about how you can get in on the action next time around? Check out UT’s podcast, Getting Started in Amateur Astronomy, for more information.

Digging Deeper For Dark Matter

This artist's conception shows a dwarf galaxy seen from the surface of a hypothetical exoplanet. A new study finds that the dark matter in dwarf galaxies is distributed smoothly rather than being clumped at their centers. This contradicts simulations using the standard cosmological model known as lambda-CDM. Credit: David A. Aguilar (CfA)

Dark matter… If it can’t be seen, then how do we know it’s there? If it weren’t for the effects of gravity, we wouldn’t. We’d have a galaxy filled with runaway stars, and no galaxy would exist for long. But how dark matter behaves and how it is distributed is one of the biggest cosmic cryptograms of all. Even with new research, there seem to be more questions than answers!

“After completing this study, we know less about dark matter than we did before,” said lead author Matt Walker, a Hubble Fellow at the Harvard-Smithsonian Center for Astrophysics.

It is generally accepted that our Universe is predominantly composed of dark matter and dark energy. The former is considered to be “cold” – that is, slow-moving exotic particles that coalesce through gravitation. As they evolve, these dark matter “clumps” attract “normal” matter, which forms present-day galaxy structures. Through computer modeling, astronomers have simulated this growth process, which predicts that galactic centers should be dense with dark matter. However, these models aren’t consistent with the findings. By measuring two dwarf galaxies, scientists have found an even distribution instead.

“Our measurements contradict a basic prediction about the structure of cold dark matter in dwarf galaxies. Unless or until theorists can modify that prediction, cold dark matter is inconsistent with our observational data,” Walker stated.

Why study a dwarf instead of a spiral? In this case, the dwarf galaxy is a perfect candidate because of its composition – 99% dark matter and 1% stars. Walker and his co-author Jorge Penarrubia (University of Cambridge, UK) chose two nearby representatives – the Fornax and Sculptor dwarfs – for their study. Compared to the Milky Way’s estimated 400 billion stars, this pair averages around 10 million each. That allowed the team to take a comprehensive sample of 1,500 to 2,500 stars for location, speed and basic chemical composition. But even at that reduced scale, this kind of stellar accounting isn’t exactly easy pickings.

“Stars in a dwarf galaxy swarm like bees in a beehive instead of moving in nice, circular orbits like a spiral galaxy,” explained Penarrubia. “That makes it much more challenging to determine the distribution of dark matter.”

What the team found was somewhat surprising. According to the modeling techniques, dark matter should have clumped at the core. Instead they found it evenly distributed over several hundred light years.

“If a dwarf galaxy were a peach, the standard cosmological model says we should find a dark matter ‘pit’ at the center. Instead, the first two dwarf galaxies we studied are like pitless peaches,” said Penarrubia.

It is hypothesized that interactions between normal and dark matter might be responsible for the distribution, but the computer simulations say it shouldn’t happen in a dwarf. New findings raising new questions? Yes. This revelation may suggest that dark matter isn’t always “cold” and that it could be impacted by normal matter in unexpected ways.

Original Story Source: Harvard Smithsonian Center for Astrophysics News Release. For Further Reading: A Method Of Measuring (Slopes Of) the Mass Profiles of Dwarf Spheroidal Galaxies.

Astronomy Without A Telescope – Flat Universe

A remarkable finding of the early 21st century, that kind of sits alongside the Nobel prize winning discovery of the universe’s accelerating expansion, is the finding that the universe is geometrically flat. This is a remarkable and unexpected feature of a universe that is expanding – let alone one that is expanding at an accelerated rate – and like the accelerating expansion, it is a key feature of our current standard model of the universe.

It may be that the flatness is just a consequence of the accelerating expansion – but to date this cannot be stated conclusively.

As usual, it’s all about Einstein. The Einstein field equations enable the geometry of the universe to be modelled – and a great variety of different solutions have been developed by different cosmology theorists. Some key solutions are the Friedmann equations, which calculate the shape and likely destiny of the universe, with three possible scenarios:
closed universe – with contents so dense that the universe’s space-time geometry is drawn in upon itself in a hyper-spherical shape. Ultimately such a universe would be expected to collapse in on itself in a big crunch.
open universe – without sufficient density to draw in space-time, producing an outflung hyperbolic geometry – commonly called a saddle shape – with a destiny to expand forever.
flat universe – with a ‘just right’ density – although an unclear destiny.

The Friedmann equations were used in twentieth century cosmology to try and determine the ultimate fate of our universe, with few people thinking that the flat scenario would be a likely finding – since a universe might be expected to only stay flat for a short period, before shifting to an open (or closed) state because its expansion (or contraction) would alter the density of its contents.

Matter density was assumed to be key to geometry – and estimates of the matter density of our universe came to around 0.2 atoms per cubic metre, while the relevant part of the Friedmann equations calculated that the critical density required to keep our universe flat would be 5 atoms per cubic metre. Since we could only find 4% of the required critical density, this suggested that we probably lived in an open universe – but then we started coming up with ways to measure the universe’s geometry directly.
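
That “about 5 atoms per cubic metre” figure falls straight out of the Friedmann critical density, ρ_c = 3H²/8πG. A quick sketch, assuming a Hubble constant of 70 km/s/Mpc (a typical modern value, not one quoted in the article):

```python
import math

G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22   # 70 km/s/Mpc converted to s^-1
m_H = 1.67e-27              # mass of a hydrogen atom, kg

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
atoms_per_m3 = rho_crit / m_H
print(f"{atoms_per_m3:.1f} hydrogen atoms per cubic metre")  # → 5.5
```

Dividing the observed ~0.2 atoms per cubic metre by this critical value gives the roughly 4% quoted above.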

There’s a YouTube video of Lawrence Krauss (of The Physics of Star Trek fame) explaining how this is done with cosmic microwave background data (from WMAP and earlier experiments) – where the CMB mapped on the sky represents one side of a triangle with you at its opposite apex looking out along its two other sides. The angles of the triangle can then be measured, and they will add up to 180 degrees in a flat (Euclidean) universe, more than 180 in a closed universe and less than 180 in an open universe.
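
The triangle test is really a measurement of curvature through the angle sum. As a toy illustration (not the actual CMB analysis), the angular excess of a geodesic triangle on a sphere of radius R is its area divided by R²:

```python
import math

def triangle_angle_sum_deg(area, radius):
    """Angle sum (degrees) of a geodesic triangle on a sphere.
    Flat space is the radius -> infinity limit, where the excess -> 0."""
    return 180.0 + math.degrees(area / radius**2)

print(triangle_angle_sum_deg(1.0, 1.0))   # closed geometry: well over 180
print(triangle_angle_sum_deg(1.0, 1e6))   # effectively flat: ~180
```

In an open (hyperbolic) universe the excess term has the opposite sign, so the sum falls below 180 degrees, just as described above.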

These findings, indicating that the universe was remarkably flat, came at the turn of the century around the same time that the 1998 accelerated expansion finding was announced.

Although the contents of the early universe may have just been matter, we now must add dark energy to explain the universe's persistent flatness. Credit: NASA.

So really, it is the universe’s flatness and the estimate that there is only 4% (0.2 atoms per cubic metre) of the matter density required to keep it flat that drives us to call on dark stuff to explain the universe. Indeed we can’t easily call on matter alone, light or dark, to account for how our universe sustains its critical density in the face of expansion, let alone accelerated expansion – since whatever sustains it appears out of nowhere. So, we appeal to dark energy to make up the deficit – without having a clue what it is.

Given how little relevance conventional matter appears to have in our universe’s geometry, one might question the continuing relevance of the Friedmann equations in modern cosmology. There is more recent interest in the De Sitter universe, another Einstein field equation solution which models a universe with no matter content – its expansion and evolution being entirely the result of the cosmological constant.

De Sitter universes, at least on paper, can be made to expand with accelerating expansion and remain spatially flat – much like our universe. From this, it is tempting to suggest that universes naturally stay flat while they undergo accelerated expansion – because that’s what universes do, their contents having little direct influence on their long-term evolution or their large-scale geometry.

But who knows really – we are both literally and metaphorically working in the dark on this.

Further reading:

Krauss: Why the universe probably is flat (video).

Uncloaking Type Ia Supernovae

This three-color composite of a portion of the Subaru Deep Field shows mostly galaxies with a few stars. The inset shows one of the 10 most distant and ancient Type Ia supernovae discovered by the American, Israeli and Japanese team.

Type Ia supernovae… Right now they are one of the most studied – and most mysterious – of all stellar phenomena. Their origins are sheer conjecture, but explaining them is only half the story. Taking a look back into almost the very beginnings of our Universe is what it’s all about, and a team of Japanese, Israeli, and U.S. astronomers have employed the Subaru Telescope to give us the most up-to-date information on these elementally explosive cosmic players.

By understanding the energy release of a Type Ia supernova, astronomers have been able to measure unfathomable distances and speculate on dark-energy-driven expansion. It was popular opinion that these events were caused by a white dwarf star pulling in so much matter from a companion that it finally exploded, but new research points in a different direction. According to the latest buzz, it may very well be the merging of two white dwarfs.

“The nature of these events themselves is poorly understood, and there is a fierce debate about how these explosions ignite,” said Dovi Poznanski, one of the main authors of the paper and a post-doctoral fellow at the University of California, Berkeley, and Lawrence Berkeley National Laboratory.

“The main goal of this survey was to measure the statistics of a large population of supernovae at a very early time, to get a look at the possible star systems,” he said. “Two white dwarfs merging can explain well what we are seeing.”

Can you imagine the power behind this theory? The Type Ia unleashed a thermonuclear reaction so strong that it is able to be traced back to nearly the beginning of expansion after the Big Bang. By employing the Subaru telescope and its prime focus camera (Suprime-Cam), the team was able to focus their attention four times on a small area named the Subaru Deep Field. In their imaging they caught 150,000 individual galaxies containing a total of 40 Type Ia supernova events. One of the most incredible parts of these findings is that these events happened about five times more frequently in the early Universe. But no worries… Even though the mechanics behind them are still poorly understood, they still serve as “cosmic distance markers”.

“As long as Type Ias explode in the same way, no matter what their origin, their intrinsic brightnesses should be the same, and the distance calibrations would remain unchanged,” says Alex Filippenko, UC Berkeley professor of astronomy.

Original Story Source: University of California, Berkeley News Release. For Further Reading: National Astronomical Observatory of Japan: Subaru News Release.

Bending The Rules – Exploring Gravitational Redshift

A cluster of galaxies as seen from the Hubble Space Telescope

Hey. We’re all aware of Einstein’s theories and how gravity affects light. We know the effect was confirmed during a total solar eclipse, but what we rarely consider in observational astronomy is that light just might get bent by other gravitational influences. If it can happen with something as small as a star, then what might occur if you had a huge group of stars? Like a galaxy… Or a group of galaxies!

What’s new in the world of light? Astrophysicists at the Dark Cosmology Centre at the Niels Bohr Institute have now come up with a method of measuring how outgoing light is affected by the gravity of galaxy clusters. Not only does each individual star and each individual galaxy possess its own gravity, but a galaxy group is held together by gravitational attraction as well. Sure, it stands to reason that gravity affects what we see – but there’s even more to it. Redshift…

“It is really wonderful. We live in an era with the technological ability to actually measure such phenomena as cosmological gravitational redshift”, says astrophysicist Radek Wojtak, Dark Cosmology Centre under the Niels Bohr Institute at the University of Copenhagen.

Together with team members Steen Hansen and Jens Hjorth, Wojtak has been collecting light data and measurements from 8,000 galaxy clusters. Their studies have included calculations from mid-placed members to calibrations on those that reside at the periphery.

“We could measure small differences in the redshift of the galaxies and see that the light from galaxies in the middle of a cluster had to ‘crawl’ out through the gravitational field, while it was easier for the light from the outlying galaxies to emerge”, explains Radek Wojtak.

Until now, gravitational redshift had only been tested with experiments and observations here on Earth and within the solar system. With the new research, the theory has been tested on a cosmological scale for the first time by analyzing galaxies in galaxy clusters in the distant universe. It is an enormously larger scale – a factor of 10²² (ten thousand billion billion times) greater than the laboratory tests. The observed data confirm Einstein’s general theory of relativity. (Credit: Dark Cosmology Centre, Niels Bohr Institute)

The next step in the equation is to measure the entire galaxy cluster’s total mass to arrive at its gravitational potential. Then, using the general theory of relativity, the gravitational redshift could be determined by galaxy location.

“It turned out that the theoretical calculations of the gravitational redshift based on the general theory of relativity were in complete agreement with the astronomical observations,” explains Wojtak. “Our analysis of observations of galaxy clusters shows that the redshift of the light is proportionally offset in relation to the gravitational influence of the galaxy cluster’s gravity. In that way our observations confirm the theory of relativity.”
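
The effect being measured here is tiny: in the weak-field limit, light climbing out of a potential well is shifted by roughly z ≈ GM/rc². A rough order-of-magnitude sketch for a massive cluster (the mass and radius are illustrative, not the paper’s values):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
Mpc = 3.086e22       # one megaparsec, m

M = 1e15 * M_sun     # a very massive galaxy cluster
r = 1.0 * Mpc        # light escaping from about 1 Mpc out

z_grav = G * M / (r * c**2)   # weak-field gravitational redshift
print(f"z ~ {z_grav:.1e}, or {z_grav * c / 1000:.0f} km/s")  # → z ~ 4.8e-05, or 14 km/s
```

A velocity offset of order 10 km/s hides under cluster velocity dispersions of roughly 1,000 km/s, which is why the signal only emerges statistically from thousands of clusters.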

Of course, this kind of revelation also has other implications… theoretical dark matter just might play a role in gravitational redshift, too. And don’t forget dark energy. All these hypothetical models need to be taken into account. But, for now, we’re looking at the big picture in a different way.

“Now the general theory of relativity has been tested on a cosmological scale and this confirms that the general theory of relativity works and that means that there is a strong indication for the presence of dark energy”, explains Radek Wojtak.

As Walt Whitman once wrote, “I open the scuttle at night and see the far-sprinkled systems, And all I see multiplied as high as I can cypher edge but the rim of the farther systems. Wider and wider they spread, expanding, always expanding, Outward and outward and forever outward.”

Original Story Source: EurekAlert News Release. Link to Gravitational redshift of galaxies in clusters as predicted by general relativity.

Dark Energy Ignited By Gamma-Ray Bursts?

An artistic image of the explosion of a star leading to a gamma-ray burst. (Source: FUW/Tentaris/Maciej Frołow)

Dark energy… We’re still not exactly sure of what it is or where it comes from. Is it possible this mysterious force is what’s driving the expansion of the Universe? A group of astronomers from the universities in Warsaw and Naples, headed by Dr. Ester Piedipalumbo, are taking a closer look at a way to measure this energetic enigma and they’re doing it with one of the most intense sources they can find – gamma-ray bursts.

“We are able to determine the distance of an explosion on the basis of the properties of the radiation emitted during gamma-ray bursts. Given that some of these explosions are related to the most remote objects in space that we know about, we are able, for the first time, to assess the speed of space-time expansion even in the relatively early periods after the Big Bang,” says Prof. Marek Demianski (FUW).

What spawned this new method? In 1998, astronomers were measuring the energy given off by Type Ia supernova events and realized their peak brightness was remarkably consistent. Like any standard candle, that consistency meant the explosions could be used to determine cosmic distances. But there was just one caveat… The more remote the event, the weaker the signal.

While these faint events weren’t lighting up the night, they were lighting up the way science thought about things. Perhaps these Type Ia supernovae were farther away than surmised… and if this were true, then perhaps instead of slowing down, the expansion of the Universe was accelerating! To set the cosmological model to rights, a new form of mass-energy needed to be introduced – dark energy – and it needed to outweigh everything we could perceive by a factor of about twenty. “Overnight, dark energy became, quite literally, the greatest mystery of the Universe,” says Prof. Demianski. In one model, put forward by Einstein, it is a property of space-time itself – the cosmological constant – while another model suggests the accelerated expansion is caused by some unknown scalar field. “In other words, it is either-or: either space-time expands by itself or is expanded by a scalar physical field inside it,” says Prof. Demianski.

So what’s the point behind the studies? If it is possible to use a gamma-ray burst as a type of standard candle, then astronomers can better assess the density of dark energy, allowing them to further refine their models. If that density remains constant over time, dark energy belongs to the cosmological constant and is a property of space-time. However, if the acceleration of the Universe is the property of a scalar field, the density of dark energy will vary. “This used to be a problem. In order to assess the changes in the density of dark energy immediately after the Big Bang, one needs to know how to measure the distance to very remote objects. So remote that even Type Ia supernovae connected to them are too faint to be observed,” says Demianski.

Now the real research begins. The energies of gamma-ray bursts had to be calibrated, and doing that accurately meant starting from bursts whose distances were already known from independent measurements, such as Type Ia supernovae in the same host galaxies. “We focused on those instances. We knew the distance to the galaxy and we also knew how much energy of the burst reached the Earth. This allowed us to calibrate the burst, that is to say, to calculate the total energy of the explosion,” explains Prof. Demianski. The next step was to find statistical dependencies between various properties of the radiation emitted during a gamma-ray burst and the total energy of the explosion. Such relations were discovered. “We cannot provide a physical explanation of why certain properties of gamma-ray bursts are correlated,” points out Prof. Demianski. “But we can say that if registered radiation has such and such properties, then the burst had such and such energy. This allows us to use bursts as standard candles, to measure distances.”
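The calibration step described above can be sketched with the standard inverse-square-law formula for a burst’s isotropic-equivalent energy. The numbers below are purely illustrative (not a real burst), and the formula assumes the burst radiates equally in all directions:

```python
import math

def isotropic_energy(d_l_cm, fluence_erg_cm2, z):
    """Isotropic-equivalent energy of a burst: E_iso = 4*pi*d_L^2 * S / (1 + z).

    d_l_cm:          luminosity distance to the host galaxy, in cm
    fluence_erg_cm2: total energy received per unit area at Earth, in erg/cm^2
    (1 + z):         redshift correction back to the source frame
    """
    return 4.0 * math.pi * d_l_cm**2 * fluence_erg_cm2 / (1.0 + z)

# Illustrative values: d_L = 1e28 cm (a few Gpc), fluence 1e-5 erg/cm^2, z = 1
e_iso = isotropic_energy(1e28, 1e-5, 1.0)
print(f"E_iso ≈ {e_iso:.2e} erg")  # a few × 10^51 erg
```

Once bursts with known distances fix this energy scale, the observed correlations let it be estimated for bursts whose distances are unknown, which is what turns them into standard candles.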

Dr. Ester Piedipalumbo and a team of researchers from the universities in Warsaw and Naples then took up the gauntlet. Despite this fascinating new concept, the reality is that suitably distant gamma-ray bursts are rare. Even with 95 candidates listed in the Amati catalogue, there simply wasn’t enough information to pin down the density of dark energy. “It is quite a disappointment. But what is important is the fact that we have in our hands a tool for verifying hypotheses about the structure of the Universe. All we need to do now is wait for the next cosmic fireworks,” concludes Prof. Demianski.

Let the games begin…

Original Story Source: University of Warsaw Press Release. For Further Reading: Cosmological models in scalar tensor theories of gravity and observations: a class of general solutions.

Astronomy Without A Telescope – Cosmic Coincidence


Cosmologists tend not to get all that excited about the universe being 74% dark energy and 26% conventional energy and matter (albeit most of the matter is dark and mysterious as well). Instead they get excited about the fact that the density of dark energy is of the same order of magnitude as that more conventional remainder.

After all, it is quite conceivable that the density of dark energy might be ten, one hundred or even one thousand times more (or less) than the remainder. But nope, it seems it’s about three times as much – which is less than ten and more than one, meaning that the two parts are of the same order of magnitude. And given the various uncertainties and error bars involved, you might even say the density of dark energy and of the more conventional remainder are roughly equivalent. This is what is known as the cosmic coincidence.
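The “same order of magnitude” claim is easy to make concrete: divide the two density fractions and take the order of magnitude of the result. A quick check, using the fractions quoted above:

```python
import math

omega_de = 0.74    # dark energy's fraction of the universe's total density
omega_rest = 0.26  # matter (dark and conventional) plus radiation

# The ratio is about 2.85 -- between 1 and 10, so the two densities
# share the same order of magnitude: that is the cosmic coincidence.
ratio = omega_de / omega_rest
print(f"ratio ≈ {ratio:.2f}")
print(f"order of magnitude of ratio: {math.floor(math.log10(ratio))}")  # 0
```

Had dark energy been a thousand times denser than the rest, the ratio’s order of magnitude would be 3 and there would be no coincidence to explain.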

To a cosmologist, particularly a philosophically-inclined cosmologist, this coincidence is intriguing and raises all sorts of ideas about why it is so. However, Lineweaver and Egan suggest this is actually the natural experience of any intelligent beings/observers across the universe, since their evolution will always roughly align with the point in time at which the cosmic coincidence is achieved.

A current view of the universe describes its development through the following steps:

Inflationary era – a huge whoomp of volume growth driven by something or other. This is a very quick era, lasting from 10⁻³⁵ to 10⁻³² seconds after the Big Bang.
Radiation dominated era – the universe continues expanding, but at a less furious rate. Its contents cool as their density declines. Hadrons begin to condense out of the hot quark-gluon soup, while dark matter forms out of whatever it forms out of – all steadily adding matter to the universe, although radiation still dominates. This era lasts for maybe 50,000 years.
Matter dominated era – this era begins when the density of matter exceeds the density of radiation and continues through to the release of the cosmic microwave background radiation at 380,000 years, when the first atoms formed – and then continues on for a further 5 billion years. Throughout this era, the energy/matter density of the whole universe continues to gravitationally restrain the rate of expansion of the universe, even though expansion does continue.
Cosmological constant dominated era – from 5 billion years to now (13.7 billion years) and presumably for all of hereafter, the energy/matter density of the universe is so diluted that it begins losing its capacity to restrain the expansion of the universe – which hence accelerates. Empty voids of space grow ever larger between local clusters of gravitationally-concentrated matter.
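The hand-off between these eras follows from how each component dilutes as the universe’s scale factor a grows (a = 1 today): radiation thins out as a⁻⁴ (diluted and redshifted), matter as a⁻³, while the cosmological constant does not dilute at all. A minimal sketch, using illustrative flat-ΛCDM density fractions:

```python
# Density of each component in units of today's critical density.
# The parameter values are illustrative round numbers, not precision cosmology.
def densities(a, omega_r=9e-5, omega_m=0.26, omega_l=0.74):
    radiation = omega_r / a**4   # dilutes fastest: volume growth plus redshift
    matter = omega_m / a**3      # dilutes with volume only
    cosmological_constant = omega_l  # fixed property of space-time
    return radiation, matter, cosmological_constant

# Walking a forward in time reproduces the succession of eras above.
for a in (1e-5, 1e-3, 0.1, 1.0):
    r, m, l = densities(a)
    dominant = max((r, "radiation"), (m, "matter"), (l, "cosmological constant"))[1]
    print(f"a = {a:8.0e}: dominated by {dominant}")
```

Early on (small a) the a⁻⁴ term swamps everything; matter overtakes it once a grows enough; and only at late times does the undiluted cosmological constant win, which is exactly the crossover marked “Now” in the figure below.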

And here we are. Lineweaver and Egan propose that it is unlikely that any intelligent life could have evolved in the universe much earlier than now (give or take a couple of billion years) since you need to progressively cycle through the star formation and destruction of Population III, II and then I stars to fill the universe with sufficient ‘metals’ to allow planets with evolutionary ecosystems to develop.

The four eras of the universe mapped over a logarithmic time scale. Note that "Now" occurs as the decline in matter density and the acceleration in cosmic expansion cross over. Credit: Lineweaver and Egan.

So any intelligent observer in this universe is likely to find the same data which underlie the phenomenon we call the cosmic coincidence. Whether any aliens describe their finding as a ‘coincidence’ may depend upon what mathematical model they have developed to formulate the cosmos. It’s unlikely to be the same one we are currently running with – full of baffling ‘dark’ components, notably a mysterious energy that behaves nothing like energy.

It might be enough for them to note that their observations have been taken at a time when the universe’s contents no longer have sufficient density to restrain the universe’s inherent tendency to expand – and so it expands at a steadily increasing rate.

Further reading: Lineweaver and Egan, The Cosmic Coincidence as a Temporal Selection Effect Produced by the Age Distribution of Terrestrial Planets in the Universe (subsequently published in The Astrophysical Journal, 2007, Vol. 671, p. 853).