Watch History Live from the Large Hadron Collider

Particle Collider
Today, CERN announced that the LHCb experiment had revealed the existence of two new baryon subatomic particles. Credit: CERN/LHC/GridPP

CERN announced that on March 30 they will attempt to circulate beams in the Large Hadron Collider at 3.5 TeV, the highest energy yet achieved in a particle accelerator. A live webcast of the event will be shown, including live footage from the control rooms for the LHC accelerator and all four LHC experiments, as well as a press conference after the first collisions are announced.

“With two beams at 3.5 TeV, we’re on the verge of launching the LHC physics program,” said CERN’s Director for Accelerators and Technology, Steve Myers. “But we’ve still got a lot of work to do before collisions. Just lining the beams up is a challenge in itself: it’s a bit like firing needles across the Atlantic and getting them to collide half way.”

The webcast will be available at a link to be announced, but the tentative schedule of events (subject to change) and more information can be found at this link.

Webcasts will also be available from the control rooms of the four LHC experiments: ALICE, ATLAS, CMS and LHCb. The webcasts will be primarily in English.

Between now and 30 March, the LHC team will be working with 3.5 TeV beams to commission the beam control systems and the systems that protect the particle detectors from stray particles. All these systems must be fully commissioned before collisions can begin.

“The LHC is not a turnkey machine,” said CERN Director General Rolf Heuer. “The machine is working well, but we’re still very much in a commissioning phase and we have to recognize that the first attempt to collide is precisely that. It may take hours or even days to get collisions.”

The last time CERN switched on a major new research machine, the Large Electron Positron collider (LEP), in 1989, it took three days from the first attempt to collide to the first recorded collisions.

The current Large Hadron Collider run began on 20 November 2009, with the first circulating beam at 0.45 TeV. Milestones were quick to follow, with twin circulating beams established by 23 November and a world record beam energy of 1.18 TeV being set on 30 November. By the time the LHC switched off for 2009 on 16 December, another record had been set with collisions recorded at 2.36 TeV and significant quantities of data recorded. Over the 2009 part of the run, each of the LHC’s four major experiments, ALICE, ATLAS, CMS and LHCb recorded over a million particle collisions, which were distributed smoothly for analysis around the world on the LHC computing grid. The first physics papers were soon to follow. After a short technical stop, beams were again circulating on 28 February 2010, and the first acceleration to 3.5 TeV was on 19 March.

Once 7 TeV collisions have been established, the plan is to run continuously for a period of 18-24 months, with a short technical stop at the end of 2010. This will bring enough data across all the potential discovery areas to firmly establish the LHC as the world’s foremost facility for high-energy particle physics.

Source: CERN

This is Getting Boring: General Relativity Passes Yet another Big Test!

Princeton University scientists (from left) Reinabelle Reyes, James Gunn and Rachel Mandelbaum led a team that analyzed more than 70,000 galaxies and demonstrated that the universe - at least up to a distance of 3.5 billion light years from Earth - plays by the rules set out by Einstein in his theory of general relativity. (Photo: Brian Wilson)

Published in 1915, Einstein’s theory of general relativity (GR) passed its first big test just a few years later, when the predicted gravitational deflection of light passing near the Sun was observed during the 1919 solar eclipse.

In 1960, GR passed its first big laboratory test here on Earth: the Pound-Rebka experiment. And over the nine decades since its publication, GR has passed test after test after test, always with flying colors (check out this review for an excellent summary).

But the tests have always been within the solar system, or otherwise indirect.

Now a team led by Princeton University scientists has tested GR to see if it holds true at cosmic scales. And, after two years of analyzing astronomical data, the scientists have concluded that Einstein’s theory works as well over vast distances as in more local regions of space.

A partial map of the distribution of galaxies in the SDSS, going out to a distance of 7 billion light years. The amount of galaxy clustering that we observe today is a signature of how gravity acted over cosmic time, and allows us to test whether general relativity holds over these scales. (M. Blanton, SDSS)

The scientists’ analysis of more than 70,000 galaxies demonstrates that the universe – at least up to a distance of 3.5 billion light years from Earth – plays by the rules set out by Einstein in his famous theory. While GR has been accepted by the scientific community for over nine decades, until now no one had tested the theory so thoroughly and robustly at distances and scales that go way beyond the solar system.

Reinabelle Reyes, a Princeton graduate student in the Department of Astrophysical Sciences, along with co-authors Rachel Mandelbaum, an associate research scholar, and James Gunn, the Eugene Higgins Professor of Astronomy, outlined their assessment in the March 11 edition of Nature.

Other scientists collaborating on the paper include Tobias Baldauf, Lucas Lombriser and Robert Smith of the University of Zurich and Uros Seljak of the University of California-Berkeley.

The results are important, they said, because they shore up current theories explaining the shape and direction of the universe, including ideas about dark energy, and dispel some hints from other recent experiments that general relativity may be wrong.

“All of our ideas in astronomy are based on this really enormous extrapolation, so anything we can do to see whether this is right or not on these scales is just enormously important,” Gunn said. “It adds another brick to the foundation that underlies what we do.”

GR is one of two core theories underlying all of contemporary astrophysics and cosmology (the other is the Standard Model of particle physics, a quantum theory); it explains everything from black holes to the Big Bang.

In recent years, several alternatives to general relativity have been proposed. These modified theories of gravity depart from general relativity on large scales to circumvent the need for dark energy, dark matter, or both. But because these theories were designed to match the predictions of general relativity about the expansion history of the universe, a factor that is central to current cosmological work, it has become crucial to know which theory is correct, or at least represents reality as best as can be approximated.

“We knew we needed to look at the large-scale structure of the universe and the growth of smaller structures composing it over time to find out,” Reyes said. The team used data from the Sloan Digital Sky Survey (SDSS), a long-term, multi-institution telescope project mapping the sky to determine the position and brightness of several hundred million galaxies and quasars.

By calculating the clustering of these galaxies, which stretch nearly one-third of the way to the edge of the universe, and analyzing their velocities and distortion from intervening material – due to weak lensing, primarily by dark matter – the researchers have shown that Einstein’s theory explains the nearby universe better than alternative theories of gravity.

Some of the 70,000 luminous galaxies in SDSS analyzed (Image: SDSS Collaboration)

The Princeton scientists studied the effects of gravity on the SDSS galaxies and clusters of galaxies over long periods of time. They observed how this fundamental force drives galaxies to clump into larger collections of galaxies and how it shapes the expansion of the universe.

Critically, because relativity calls for the curvature of space to be equal to the curvature of time, the researchers could calculate whether light was influenced in equal amounts by both, as it should be if general relativity holds true.

“This is the first time this test was carried out at all, so it’s a proof of concept,” Mandelbaum said. “There are other astronomical surveys planned for the next few years. Now that we know this test works, we will be able to use it with better data that will be available soon to more tightly constrain the theory of gravity.”

Firming up the predictive powers of GR can help scientists better understand whether current models of the universe make sense, the scientists said.

“Any test we can do in building our confidence in applying these very beautiful theoretical things but which have not been tested on these scales is very important,” Gunn said. “It certainly helps when you are trying to do complicated things to understand fundamentals. And this is a very, very, very fundamental thing.”

“The nice thing about going to the cosmological scale is that we can test any full, alternative theory of gravity, because it should predict the things we observe,” said co-author Uros Seljak, a professor of physics and of astronomy at UC Berkeley and a faculty scientist at Lawrence Berkeley National Laboratory who is currently on leave at the Institute of Theoretical Physics at the University of Zurich. “Those alternative theories that do not require dark matter fail these tests.”

Sources: “Princeton scientists say Einstein’s theory applies beyond the solar system” (Princeton University), “Study validates general relativity on cosmic scale, existence of dark matter” (University of California Berkeley), “Confirmation of general relativity on large scales from weak lensing and galaxy velocities” (Nature, arXiv preprint)

World-wide Campaign Sheds New Light on Nature’s “LHC”

Recent observations of blazar jets require researchers to look deeper into whether current theories about jet formation and motion require refinement. This simulation, courtesy of Jonathan McKinney (KIPAC), shows a black hole pulling in nearby matter (yellow) and spraying energy back out into the universe in a jet (blue and red) that is held together by magnetic field lines (green).

In a manner somewhat like the formation of an alliance to defeat Darth Vader’s Death Star, more than a decade ago astronomers formed the Whole Earth Blazar Telescope (WEBT) consortium to understand Nature’s Death Ray Gun (a.k.a. blazars). And contrary to its at-death’s-door sounding name, the consortium’s GASP (the GLAST-AGILE Support Program) has proved crucial to unraveling the secrets of how Nature’s “LHC” works.

“As the universe’s biggest accelerators, blazar jets are important to understand,” said Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) Research Fellow Masaaki Hayashida, corresponding author on the recent paper presenting the new results with KIPAC Astrophysicist Greg Madejski. “But how they are produced and how they are structured is not well understood. We’re still looking to understand the basics.”

Blazars dominate the gamma-ray sky, discrete spots on the dark backdrop of the universe. As nearby matter falls into the supermassive black hole at the center of a blazar, “feeding” the black hole, it sprays some of this energy back out into the universe as a jet of particles.

Researchers had previously theorized that such jets are held together by strong magnetic field tendrils, while the jet’s light is created by particles spiraling around these wisp-thin magnetic field “lines”.

Yet, until now, the details have been relatively poorly understood. The recent study upsets the prevailing understanding of the jet’s structure, revealing new insight into these mysterious yet mighty beasts.

“This work is a significant step toward understanding the physics of these jets,” said KIPAC Director Roger Blandford. “It’s this type of observation that is going to make it possible for us to figure out their anatomy.”

Over a full year of observations, the researchers focused on one particular blazar jet, 3C279, located in the constellation Virgo, monitoring it in many different wavebands: gamma-ray, X-ray, optical, infrared and radio. Blazars flicker continuously, and researchers expected continual changes in all wavebands. Midway through the year, however, researchers observed a spectacular change in the jet’s optical and gamma-ray emission: a 20-day-long flare in gamma rays was accompanied by a dramatic change in the jet’s optical light.

Although most optical light is unpolarized – consisting of light with an equal mix of all polarizations – the extreme bending of energetic particles around a magnetic field line can polarize light. During the 20-day gamma-ray flare, optical light from the jet changed its polarization. This temporal connection between changes in the gamma-ray light and changes in the optical polarization suggests that light in both wavebands is created in the same part of the jet; during those 20 days, something in the local environment changed to cause both the optical and gamma-ray light to vary.

“We have a fairly good idea of where in the jet optical light is created; now that we know the gamma rays and optical light are created in the same place, we can for the first time determine where the gamma rays come from,” said Hayashida.

This knowledge has far-reaching implications about how a supermassive black hole produces polar jets. The great majority of energy released in a jet escapes in the form of gamma rays, and researchers previously thought that all of this energy must be released near the black hole, close to where the matter flowing into the black hole gives up its energy in the first place. Yet the new results suggest that – like optical light – the gamma rays are emitted relatively far from the black hole. This, Hayashida and Madejski said, in turn suggests that the magnetic field lines must somehow help the energy travel far from the black hole before it is released in the form of gamma rays.

“What we found was very different from what we were expecting,” said Madejski. “The data suggest that gamma rays are produced not one or two light days from the black hole [as was expected] but closer to one light year. That’s surprising.”

In addition to revealing where in the jet light is produced, the gradual change of the optical light’s polarization also reveals something unexpected about the overall shape of the jet: the jet appears to curve as it travels away from the black hole.

“At one point during a gamma-ray flare, the polarization rotated about 180 degrees as the intensity of the light changed,” said Hayashida. “This suggests that the whole jet curves.”

This new understanding of the inner workings and construction of a blazar jet requires a new working model of the jet’s structure, one in which the jet curves dramatically and the most energetic light originates far from the black hole. This, Madejski said, is where theorists come in. “Our study poses a very important challenge to theorists: how would you construct a jet that could potentially be carrying energy so far from the black hole? And how could we then detect that? Taking the magnetic field lines into account is not simple. Related calculations are difficult to do analytically, and must be solved with extremely complex numerical schemes.”

Theorist Jonathan McKinney, a Stanford University Einstein Fellow and expert on the formation of magnetized jets, agrees that the results pose as many questions as they answer. “There’s been a long-time controversy about these jets – about exactly where the gamma-ray emission is coming from. This work constrains the types of jet models that are possible,” said McKinney, who is unassociated with the recent study. “From a theoretician’s point of view, I’m excited because it means we need to rethink our models.”

As theorists consider how the new observations fit models of how jets work, Hayashida, Madejski and other members of the research team will continue to gather more data. “There’s a clear need to conduct such observations across all types of light to understand this better,” said Madejski. “It takes a massive amount of coordination to accomplish this type of study, which included more than 250 scientists and data from about 20 telescopes. But it’s worth it.”

With this and future multi-wavelength studies, theorists will have new insight with which to craft models of how the universe’s biggest accelerators work. Darth Vader has been denied all access to these research results.

Sources: DOE/SLAC National Accelerator Laboratory Press Release, a paper in the 18 February, 2010 issue of Nature.

Small Asteroids, Bread Flour, and a Dutch Physicist’s 150-Year-Old Theory

Itokawa, a dusty asteroid (Credit: JAXA)

No, it’s not the Universe Puzzle No. 3; rather, it’s an intriguing result from recent work into the strange shapes and composition of small asteroids.

Images sent back from space missions suggest that smaller asteroids are not pristine chunks of rock, but are instead covered in rubble that ranges in size from meter-sized boulders to flour-like dust. Indeed, some asteroids appear to be up to 50% empty space, suggesting that they could be collections of rubble with no solid core.

But how do these asteroids form and evolve? And if we ever have to deflect one, to avoid the fate of the dinosaurs, how do we do so without breaking it up and making the danger far greater?

Johannes Diderik van der Waals (1837-1923), with a little help from Daniel Scheeres, Michael Swift, and colleagues, to the rescue.

Rocks and dust on asteroid Eros (Credit: NASA)

Asteroids tend to spin rapidly on their axes – and gravity at the surface of smaller bodies can be one thousandth or even one millionth of that on Earth. As a result scientists are left wondering how the rubble clings on to the surface. “The few images that we have of asteroid surfaces are a challenge to understand using traditional geophysics,” University of Colorado’s Scheeres explained.

To get to the bottom of this mystery, the team – Daniel Scheeres, colleagues at the University of Colorado, and Michael Swift at the University of Nottingham – made a thorough study of the relevant forces involved in binding rubble to an asteroid. The formation of small bodies in space involves gravity and cohesion – the latter being the attraction between molecules at the surface of materials. While gravity is well understood, the nature of the cohesive forces at work in the rubble and their relative strengths is much less well known.

The team assumed that the cohesive forces between grains are similar to those found in “cohesive powders” – which include bread flour – because such powders resemble what has been seen on asteroid surfaces. To gauge the significance of these forces, the team considered their strength relative to the gravitational forces present on a small asteroid, where gravity at the surface is about one millionth that on Earth. The team found that gravity is an ineffective binding force for rocks observed on smaller asteroids. Electrostatic attraction was also negligible, other than where a portion of the asteroid that is illuminated by the Sun comes into contact with a dark portion.

Fast backward to the mid-19th century, a time when the existence of molecules was controversial, and inter-molecular forces pure science fiction (except, of course, that there was no such thing then). Van der Waals’ doctoral thesis provided a powerful explanation for the transition between gaseous and liquid phases, in terms of weak forces between the constituent molecules, which he assumed have a finite size (more than half a century was to pass before these forces were understood, quantitatively, in terms of quantum mechanics and atomic theory).

Van der Waals forces – weak electrostatic attractions between adjacent atoms or molecules that arise from fluctuations in the positions of their electrons – seem to do the trick for particles that are less than about one meter in size. The size of the van der Waals force is proportional to the contact surface area of a particle – unlike gravity, which is proportional to the mass (and therefore volume) of the particle. As a result, the relative strength of van der Waals compared with gravity increases as the particle gets smaller.
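This scaling argument can be sketched numerically. The comparison below uses Hamaker's sphere-on-plane approximation for the van der Waals force; the Hamaker constant, separation distance, and grain density are illustrative assumptions, not values from the paper:

```python
# Van der Waals cohesion vs. gravity for a grain on a small asteroid's surface.
# All material constants are illustrative assumptions, not values from the paper.
import math

A = 1e-19        # Hamaker constant (J), typical order for silicates (assumed)
D = 1.5e-10      # inter-surface separation (m), atomic-scale cutoff (assumed)
rho = 3000.0     # grain density (kg/m^3) (assumed)
g_ast = 9.81e-6  # surface gravity: about one millionth of Earth's (per the article)

def f_vdw(r):
    """Van der Waals force (N) between a sphere of radius r and a flat surface.
    Hamaker's approximation F = A*r / (6*D**2): it grows only linearly with r."""
    return A * r / (6 * D**2)

def f_grav(r):
    """Weight (N) of the grain: proportional to its volume, hence to r**3."""
    return (4.0 / 3.0) * math.pi * r**3 * rho * g_ast

for r in (1e-6, 1e-3, 1.0):  # a 1 micron dust grain, a 1 mm pebble, a 1 m boulder
    print(f"r = {r:>6g} m  ->  F_vdw / F_grav = {f_vdw(r) / f_grav(r):.1e}")
```

Because the ratio falls off as 1/r², cohesion overwhelms gravity for dust grains but only barely wins for meter-scale boulders, consistent with the "less than about one meter" threshold described above.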

This could explain, for example, recent observations by Scheeres and colleagues that small asteroids are covered in fine dust – material that some scientists thought would be driven away by solar radiation. The research may also have implications for how asteroids respond to the “YORP effect” – the increase of the angular velocity of small asteroids by the absorption of solar radiation. As the bodies spin faster, this recent work suggests that they would expel larger rocks while retaining smaller ones. If such an asteroid were a collection of rubble, the result could be an aggregate of smaller particles held together by van der Waals forces.

Asteroid expert Keith Holsapple of the University of Washington is impressed that not only has Scheeres’ team estimated the forces in play on an asteroid, it has also looked at how these vary with asteroid and particle size. “This is a very important paper that addresses a key issue in the mechanics of the small bodies of the solar system and particle mechanics at low gravity,” he said.

Scheeres noted that testing this theory requires a space mission to determine the mechanical and strength properties of an asteroid’s surface. “We are developing such a proposal now,” he said.

Source: Physics World. “Scaling forces to asteroid surfaces: The role of cohesion” is a preprint by Scheeres, et al. (arXiv:1002.2478), submitted for publication in Icarus.

ESA’s Tough Choice: Dark Matter, Sun Close Flyby, Exoplanets (Pick Two)

Thales Alenia Space and EADS Astrium concepts for Euclid (ESA)


Key questions relevant to fundamental physics and cosmology: probe the nature of the mysterious dark energy and dark matter (Euclid); determine the frequency of exoplanets around other stars, including Earth analogs (PLATO); or take the closest look at our Sun yet possible, approaching to just 62 solar radii (Solar Orbiter) … but only two can fly! What would be your picks?

These three mission concepts have been chosen by the European Space Agency’s Science Programme Committee (SPC) as candidates for two medium-class missions to be launched no earlier than 2017. They now enter the definition phase, the next step required before the final decision is taken as to which missions are implemented.

These three missions are the finalists from 52 proposals that were either made or carried forward in 2007. They were whittled down to just six mission proposals in 2008 and sent for industrial assessment. Now that the reports from those studies are in, the missions have been pared down again. “It was a very difficult selection process. All the missions contained very strong science cases,” says Lennart Nordh, Swedish National Space Board and chair of the SPC.

And the tough decisions are not yet over. Only two of the three missions – Euclid, PLATO and Solar Orbiter – can be selected for the M-class launch slots. All three missions present challenges that will have to be resolved at the definition phase. A specific challenge, of which the SPC was conscious, is the ability of these missions to fit within the available budget. The final decision about which missions to implement will be taken after the definition activities are completed, which is foreseen to be in mid-2011.
Euclid is an ESA mission to map the geometry of the dark Universe. The mission would investigate the distance-redshift relationship and the evolution of cosmic structures. It would achieve this by measuring shapes and redshifts of galaxies and clusters of galaxies out to redshifts ~2, or equivalently to a look-back time of 10 billion years. It would therefore cover the entire period over which dark energy played a significant role in accelerating the expansion.
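As a rough check of the quoted figure, the lookback time to redshift ~2 can be computed by integrating the Friedmann equation for a flat ΛCDM universe (the cosmological parameters below, H0 = 70 km/s/Mpc and Ωm = 0.3, are typical assumed values, not taken from the mission documents):

```python
# Lookback time to redshift z = 2 in a flat LambdaCDM cosmology (assumed parameters).
import math

H0_per_gyr = 70 / 978.0   # H0 converted from km/s/Mpc to 1/Gyr (so 1/H0 ~ 14 Gyr)
om, ol = 0.3, 0.7         # matter and dark-energy density fractions (assumed)

def integrand(z):
    # dt/dz factor: 1 / ((1+z) * E(z)) with E(z) = sqrt(om*(1+z)^3 + ol)
    return 1.0 / ((1 + z) * math.sqrt(om * (1 + z)**3 + ol))

# trapezoidal integration of the lookback integral from z = 0 to z = 2
n = 10000
h = 2.0 / n
integral = sum(0.5 * h * (integrand(i * h) + integrand((i + 1) * h)) for i in range(n))
lookback_gyr = integral / H0_per_gyr
print(f"lookback time to z = 2: ~{lookback_gyr:.1f} Gyr")  # ~10 Gyr
```

The result lands at roughly 10 billion years, matching the "look-back time of 10 billion years" stated for redshift ~2.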

By approaching as close as 62 solar radii, Solar Orbiter would view the solar atmosphere with high spatial resolution and combine this with measurements made in-situ. Over the extended mission periods Solar Orbiter would deliver images and data that would cover the polar regions and the side of the Sun not visible from Earth. Solar Orbiter would coordinate its scientific mission with NASA’s Solar Probe Plus within the joint HELEX program (Heliophysics Explorers) to maximize their combined science return.

Thales Alenia Space concept, from assessment phase (ESA)

PLATO (PLAnetary Transit and Oscillations of stars) would discover and characterize a large number of close-by exoplanetary systems, with a precision in the determination of mass and radius of 1%.

In addition, the SPC has decided to consider, at its next meeting in June, whether to also select a European contribution to the SPICA mission.

SPICA would be an infrared space telescope led by the Japanese Space Agency JAXA. It would provide ‘missing-link’ infrared coverage in the region of the spectrum between that seen by the ESA-NASA Webb telescope and the ground-based ALMA telescope. SPICA would focus on the conditions for planet formation and distant young galaxies.

“These missions continue the European commitment to world-class space science,” says David Southwood, ESA Director of Science and Robotic Exploration, “They demonstrate that ESA’s Cosmic Vision programme is still clearly focused on addressing the most important space science.”

Source: ESA chooses three scientific missions for further study

Does Zonal Swishing Play a Part in Earth’s Magnetic Field Reversals?

Zonal swishing in the Earth's outer core (Credit: Akira Kageyama, Kobe University)

Why does the Earth’s magnetic field ‘flip’ every million years or so? Whatever the reason, or reasons, the way the liquid iron of the Earth’s outer core flows – its currents, its structure, its long-term cycles – is important, either as cause, effect, or a bit of both.

The main component of the Earth’s field – which defines the magnetic poles – is a dipole generated by the convection of molten nickel-iron in the outer core (the inner core is solid, so its role is secondary; remember that the Earth’s core is well above the Curie temperature, so the iron is not ferromagnetic).

But what about the fine structure? Does the outer core have the equivalent of the Earth’s atmosphere’s jet streams, for example? Recent research by a team of geophysicists in Japan sheds some light on these questions, and so hints at what causes magnetic pole flips.

About the image: This image shows how an imaginary particle suspended in the liquid iron outer core of the Earth tends to flow in zones even when conditions in the geodynamo are varied. The colors represent the vorticity or “amount of rotation” that this particle experiences, where red signifies positive (east-west) flow and blue signifies negative (west-east) flow. Left to right shows how the flow responds to increasing Rayleigh numbers, which is associated with flow driven by buoyancy. Top to bottom shows how flow responds to increasing angular velocities of the whole geodynamo system.

The jet stream winds that circle the globe and those in the atmospheres of the gas giants (Jupiter, Saturn, etc) are examples of zonal flows. “A common feature of these zonal flows is that they are spontaneously generated in turbulent systems. Because the Earth’s outer core is believed to be in a turbulent state, it is possible that there is zonal flow in the liquid iron of the outer core,” Akira Kageyama at Kobe University and colleagues say in their recent Nature paper. Modeling the geodynamo – which generates the Earth’s magnetic field – to build a more detailed picture of convection in the Earth’s outer core, the team found a secondary flow pattern: inner sheet-like radial plumes surrounded by a westward cylindrical zonal flow.

This work was carried out using the Earth Simulator supercomputer, based in Japan, which offered sufficient spatial resolution to determine these secondary effects. Kageyama and his team also confirmed, using a numerical model, that this dual-convection structure can co-exist with the dominant convection that generates the north and south poles – a critical consistency check on their models: “We numerically confirm that the dual-convection structure with such a zonal flow is stable under a strong, self-generated dipole magnetic field,” they write.

This kind of zonal flow in the outer core has not been seen in geodynamo models before, due largely to the lack of sufficient resolution in earlier models. What role these zonal flows play in the reversal of the Earth’s magnetic field is one question that Kageyama and his team’s results will now make it possible to pursue.

Sources: Physics World, based on a paper in the 11 February, 2010 issue of Nature. Earth Simulator homepage

Einstein’s General Relativity Tested Again, Much More Stringently

Einstein and Relativity
Albert Einstein

This time it was the gravitational redshift part of General Relativity; and the stringency? An astonishing better-than-one-part-in-100-million!

How did Steven Chu (US Secretary of Energy, though this work was done while he was at the University of California Berkeley), Holger Müller (Berkeley), and Achim Peters (Humboldt University in Berlin) beat the previous best gravitational redshift test (in 1976, using two atomic clocks – one on the surface of the Earth and the other sent up to an altitude of 10,000 km in a rocket) by a staggering 10,000 times?

By exploiting wave-particle duality and superposition within an atom interferometer!

Cesium atom interferometer test of gravitational redshift (Courtesy Nature)

About this figure: Schematic of how the atom interferometer operates. The trajectories of the two atoms are plotted as functions of time. The atoms are accelerating due to gravity and the oscillatory lines depict the phase accumulation of the matter waves. Arrows indicate the times of the three laser pulses. (Courtesy: Nature).

Gravitational redshift is an inevitable consequence of the equivalence principle that underlies general relativity. The equivalence principle states that the local effects of gravity are the same as those of being in an accelerated frame of reference. So the downward force felt by someone in a lift could be equally due to an upward acceleration of the lift or to gravity. Pulses of light sent upwards from a clock on the lift floor will be redshifted when the lift is accelerating upwards, meaning that this clock will appear to tick more slowly when its flashes are compared at the ceiling of the lift to another clock. Because there is no way to tell gravity and acceleration apart, the same will hold true in a gravitational field; in other words the greater the gravitational pull experienced by a clock, or the closer it is to a massive body, the more slowly it will tick.
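In the weak-field limit, this clock effect reduces to a fractional frequency shift of approximately gΔh/c² for a height difference Δh, which gives a feel for how tiny the effect is near Earth (a minimal sketch using standard constants):

```python
# Weak-field gravitational redshift: fractional frequency shift ~ g*dh/c^2.
g = 9.81       # m/s^2, gravitational acceleration at Earth's surface
c = 2.998e8    # m/s, speed of light

shift_per_meter = g / c**2   # fractional shift per meter of height gained
print(f"per meter of height: {shift_per_meter:.2e}")   # ~1.1e-16

# Over the ~10,000 km apogee of the 1976 rocket test (treating g as constant,
# which overestimates the true integrated shift) the scale is:
print(f"over 10,000 km: {shift_per_meter * 1e7:.1e}")  # ~1e-9
```

A clock one meter higher than another thus ticks faster by only about one part in 10^16, which is why laboratory tests of this effect are so demanding.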

Confirmation of this effect supports the idea that gravity is geometry – a manifestation of spacetime curvature – because the flow of time is no longer constant throughout the universe but varies according to the distribution of massive bodies. Exploring the idea of spacetime curvature is important when distinguishing between different theories of quantum gravity because there are some versions of string theory in which matter can respond to something other than the geometry of spacetime.

Gravitational redshift, however, as a manifestation of local position invariance (the idea that the outcome of any non-gravitational experiment is independent of where and when in the universe it is carried out) is the least well confirmed of the three types of experiment that support the equivalence principle. The other two – the universality of freefall and local Lorentz invariance – have been verified with precisions of 10^-13 or better, whereas gravitational redshift had previously been confirmed only to a precision of 7×10^-5.

In 1997 Peters used laser trapping techniques developed by Chu to capture cesium atoms and cool them to a few millionths of a kelvin (in order to reduce their velocity as much as possible), and then used a vertical laser beam to impart an upward kick to the atoms in order to measure gravitational freefall.

Now, Chu and Müller have re-interpreted the results of that experiment to give a measurement of the gravitational redshift.

In the experiment each of the atoms was exposed to three laser pulses. The first pulse placed the atom into a superposition of two equally probable states – either leaving it alone to decelerate and then fall back down to Earth under gravity’s pull, or giving it an extra kick so that it reached a greater height before descending. A second pulse was then applied at just the right moment so as to push the atom in the second state back faster toward Earth, causing the two superposition states to meet on the way down. At this point the third pulse measured the interference between these two states brought about by the atom’s wave nature: any difference in the gravitational redshift experienced by the two states, which existed at different heights above the Earth’s surface, would show up as a change in the relative phase of the two states.

The virtue of this approach is the extremely high frequency of a cesium atom’s de Broglie wave – some 3×10^25 Hz. Although during the 0.3 s of freefall the matter waves on the higher trajectory experienced an elapsed time of just 2×10^-20 s more than the waves on the lower trajectory did, the enormous frequency of their oscillation, combined with the ability to measure amplitude differences of just one part in 1000, meant that the researchers were able to confirm gravitational redshift to a precision of 7×10^-9.
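To get a feel for these numbers, a short sketch using the standard Compton-frequency relation f = mc²/h (the cesium mass and physical constants below are standard reference values, assumed rather than taken from the paper):

```python
# Rough check of the numbers quoted above: the de Broglie (Compton)
# frequency of a cesium atom, f = m*c^2/h, and the number of extra
# oscillation cycles accumulated over the quoted 2e-20 s time difference.
H = 6.62607e-34     # Planck constant, J*s
C = 2.99792e8       # speed of light, m/s
U = 1.66054e-27     # atomic mass unit, kg
M_CS = 132.905 * U  # mass of a cesium-133 atom, kg

f = M_CS * C**2 / H       # ~3e25 Hz, as quoted in the article
extra_cycles = f * 2e-20  # cycles gained on the upper trajectory

print(f"Compton frequency: {f:.2e} Hz")
print(f"Extra cycles over the 0.3 s fall: {extra_cycles:.2e}")
```

Even a time difference of 2×10^-20 s thus corresponds to hundreds of thousands of full oscillation cycles, which is what makes the phase comparison so sensitive.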

As Müller puts it, “If the time of freefall was extended to the age of the universe – 14 billion years – the time difference between the upper and lower routes would be a mere one thousandth of a second, and the accuracy of the measurement would be 60 ps, the time it takes for light to travel about a centimetre.”

Müller hopes to further improve the precision of the redshift measurements by increasing the distance between the two superposition states of the cesium atoms. The distance achieved in the current research was a mere 0.1 mm, but, he says, by increasing this to 1 m it should be possible to detect gravitational waves, predicted by general relativity but not yet directly observed.

Sources: Physics World; the paper is in the 18 February, 2010 issue of Nature

Universe to WMAP: ΛCDM Rules, OK?

Temperature and polarization around hot and cold spots (Credit: NASA / WMAP Science Team)

[/caption]
The Wilkinson Microwave Anisotropy Probe (WMAP) science team has finished analyzing seven full years of data from the little probe that could, and once again it seems we can sum up the universe in six parameters and a model.

Using the seven-year WMAP data, together with recent results on the large-scale distribution of galaxies, and an updated estimate of the Hubble constant, the present-day age of the universe is 13.75 (plus-or-minus 0.11) billion years, dark energy comprises 72.8% (+/- 1.5%) of the universe’s mass-energy, baryons 4.56% (+/- 0.16%), non-baryonic matter (CDM) 22.7% (+/- 1.4%), and the redshift of reionization is 10.4 (+/- 1.2).

In addition, the team reports several new cosmological constraints – the primordial abundance of helium (which rules out various alternative, ‘cold big bang’ models), and an estimate of a parameter describing the density fluctuations in the very early universe precisely enough to rule out a whole class of inflation models (those with a Harrison-Zel’dovich-Peebles spectrum), to take just two – as well as tighter limits on many others (the number of neutrino species, the mass of the neutrino, parity violations, axion dark matter, …).

The best eye-candy in the team’s six papers is the set of stacked temperature and polarization maps for hot and cold spots; if these spots are due to sound waves in matter, frozen in when radiation (photons) and baryons parted company – the cosmic microwave background (CMB) encodes all the details of this separation – then there should be nicely circular rings, of rather exact sizes, around the spots. Further, the polarization directions should switch from radial to tangential, from the center out, for cold spots (and vice versa for hot spots).

And that’s just what the team found!

Concerning Dark Energy. Since the Five-Year WMAP results were published, several independent studies with direct relevance to cosmology have been published. The WMAP team took those from observations of the baryon acoustic oscillations (BAO) in the distribution of galaxies; of Cepheids, supernovae, and a water maser in local galaxies; of time-delay in a lensed quasar system; and of high redshift supernovae, and combined them to reduce the nooks and crannies in parameter space in which non-cosmological constant varieties of dark energy could be hiding. At least some alternative kinds of dark energy may still be possible, but for now Λ, the cosmological constant, rules.

Concerning Inflation. Very, very, very early in the life of the universe – so the theory of cosmic inflation goes – there was a period of dramatic expansion, and the tiny quantum fluctuations before inflation became the giant cosmic structures we see today. “Inflation predicts that the statistical distribution of primordial fluctuations is nearly a Gaussian distribution with random phases. Measuring deviations from a Gaussian distribution,” the team reports, “is a powerful test of inflation, as how precisely the distribution is (non-) Gaussian depends on the detailed physics of inflation.” While the limits on non-Gaussianity (as it is called), from analysis of the WMAP data, only weakly constrain various models of inflation, they do leave almost nowhere for cosmological models without inflation to hide.

Concerning ‘cosmic shadows’ (the Sunyaev-Zel’dovich (SZ) effect). While many researchers have looked for cosmic shadows in WMAP data before – perhaps the best known to the general public is the 2006 Lieu, Mittaz, and Zhang paper (the SZ effect: hot electrons in the plasma which pervades rich clusters of galaxies interact with CMB photons, via inverse Compton scattering) – the WMAP team’s recent analysis is their first to investigate this effect. They detect the SZ effect directly in the nearest rich cluster (Coma; Virgo is behind the Milky Way foreground), and also statistically by correlation with the location of some 700 relatively nearby rich clusters. While the WMAP team’s finding is consistent with data from x-ray observations, it is inconsistent with theoretical models. Back to the drawing board for astrophysicists studying galaxy clusters.

Seven Year Microwave Sky (Credit: NASA/WMAP Science Team)

I’ll wrap up by quoting Komatsu et al. “The standard ΛCDM cosmological model continues to be an exquisite fit to the existing data.”

Primary source: Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation (arXiv:1001.4738). The five other Seven-Year WMAP papers are: Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Are There Cosmic Microwave Background Anomalies? (arXiv:1001.4758), Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Planets and Celestial Calibration Sources (arXiv:1001.4731), Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Sky Maps, Systematic Errors, and Basic Results (arXiv:1001.4744), Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Power Spectra and WMAP-Derived Parameters (arXiv:1001.4635), and Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Galactic Foreground Emission (arXiv:1001.4555). Also check out the official WMAP website.

What is Schrodinger’s Cat?

Schrodinger’s cat is named after Erwin Schrödinger, a physicist from Austria who made substantial contributions to the development of quantum mechanics in the 1920s (he won a Nobel Prize for this work in 1933). Apart from the poor cat (more later), his name is forever associated with quantum mechanics via the Schrödinger equation, which every physics student has to grapple with.

Schrodinger’s cat is actually a thought experiment (Gedankenexperiment) – and the cat may not have been Erwin’s, but his wife’s, or one of his lovers’ (Erwin had an unconventional lifestyle) – designed to test a really weird implication of the physics he and other physicists were developing at the time. It was motivated by a 1935 paper by Einstein, Podolsky, and Rosen; this paper is the source of the famous EPR paradox.

In the thought experiment, Schrodinger’s cat is placed inside a box containing a piece of radioactive material, and a Geiger counter wired to a flask of poison in such a way that if the Geiger counter detects a decay, then the flask is smashed, the poison gas released, and the cat dies (fun piece of trivia: an animal rights group accused physicists of cruelty to animals, based on a distorted version of this thought experiment! though maybe that’s just an urban legend). The half-life of the radioactive material is an hour, so after an hour, there is a 50% probability that the cat is dead, and an equal probability that it is alive. In quantum mechanics, these two states are superposed (a technical term), and the cat is neither dead nor alive, or half-dead and half-alive, or … which is really, really weird.
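That 50% figure follows from exponential decay: for a single atom, the probability of having decayed after exactly one half-life is one half. A minimal sketch (the function name is just illustrative):

```python
def decay_probability(t, half_life):
    """Probability that a single radioactive atom has decayed by time t,
    given exponential decay with the specified half-life."""
    return 1.0 - 2.0 ** (-t / half_life)

# After one half-life (one hour in the thought experiment), the decay
# probability -- and hence the chance the cat is dead -- is exactly 0.5.
print(decay_probability(1.0, 1.0))  # 0.5
```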

Now the theory – quantum mechanics – has been tested perhaps more thoroughly than any other theory in physics, and it seems to describe how the universe behaves with extraordinary accuracy. And the theory says that when the box is opened – to see if the cat is dead, alive, half-dead and half-alive, or anything else – the wavefunction (describing the cat, Geiger counter, etc) collapses, or decoheres, or the states are no longer entangled (all technical terms), and we see only a dead cat or a cat very much alive.

There are several ways to get your mind around what’s going on – or several interpretations (you guessed it, yet another technical term!) – with names like Copenhagen interpretation, many worlds interpretation, etc, but the key thing is that the theory is mute on the interpretations … it simply says you can calculate stuff using the equations, and what your calculations show is what you’ll see, in any experiment.

Fast forward to some time after Schrödinger – and Einstein, Podolsky, and Rosen – had died, and we find that tests of the EPR paradox were proposed, then conducted, and the universe does indeed seem to behave just like Schrödinger’s cat! In fact, the results from these experimental tests are used in a kind of uncrackable cryptography, and are the basis for a revolutionary kind of computer.

Keen to learn more? Try these: Schrödinger’s Rainbow is a slideshow review of the general topic (California Institute of Technology; caution, 3MB PDF file!); Schrodinger’s cat comes into view, a news story on a macroscopic demonstration; and Schrödinger’s Cat (University of Houston).

Schrodinger’s cat is indirectly referenced in several Astronomy Cast episodes, among them Quantum Mechanics, and Entanglement; check them out!

Sources: Cornell University, Wikipedia

Nuclear Fusion Power Closer to Reality Say Two Separate Teams

Nuclear Physics
Nuclear fusion. Credit: Lancaster University

[/caption]

For years, scientists have been trying to replicate, in laboratories here on Earth, the type of nuclear fusion that occurs naturally in stars, in order to develop a clean and almost limitless source of energy. This week, two different research teams report significant headway toward achieving fusion ignition – heating and compressing a fuel to the point where scientists might harness the intense energy of nuclear fusion. One team used a massive laser system to test the possibility of heating heavy hydrogen atoms to the point of ignition. The second team used a giant levitating magnet to bring matter to extremely high densities – a necessary step for nuclear fusion.

Unlike nuclear fission, which tears atoms apart to release energy and highly radioactive by-products, fusion involves putting two heavy hydrogen isotopes, called deuterium and tritium, under immense pressure – “squeezing” them together – so that they fuse. This produces harmless helium and vast amounts of energy.
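The “vast amounts of energy” can be made concrete with the standard mass-defect arithmetic for the D + T → He-4 + n reaction (the atomic masses below are textbook reference values, not figures from the article):

```python
# Energy released by the D + T -> He-4 + n fusion reaction, computed
# from the mass defect: the products weigh slightly less than the
# reactants, and the difference appears as kinetic energy (E = mc^2).
U_TO_MEV = 931.494  # energy equivalent of 1 atomic mass unit, in MeV

masses = {            # atomic masses in unified atomic mass units (u)
    "D":   2.014102,  # deuterium
    "T":   3.016049,  # tritium
    "He4": 4.002602,  # helium-4
    "n":   1.008665,  # neutron
}

defect = masses["D"] + masses["T"] - masses["He4"] - masses["n"]
energy_mev = defect * U_TO_MEV  # roughly 17.6 MeV per fusion event

print(f"{energy_mev:.1f} MeV")
```

That is roughly 17.6 MeV per reaction – millions of times the energy released per atom in a chemical reaction, which is what makes fusion so attractive as a power source.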

Recent experiments at the National Ignition Facility in Livermore, California used a massive laser system the size of three football fields. Siegfried Glenzer and his team aimed 192 intense laser beams at a small capsule – the size needed to hold a mixture of deuterium and tritium – which, upon implosion, can trigger burning fusion plasmas and an outpouring of usable energy. The researchers heated the capsule to 3.3 million kelvin, and in doing so, paved the way for the next big step: igniting and imploding a fuel-filled capsule.

In a second report released earlier this week, researchers used the Levitated Dipole Experiment, or LDX, to suspend a giant donut-shaped magnet weighing about half a ton in midair using an electromagnetic field. The researchers used the magnet to control the motion of an extremely hot gas of charged particles, called a plasma, contained within its outer chamber.

The donut magnet creates a type of turbulence called “pinching” that causes the plasma to condense instead of spreading out, as turbulence usually makes it do. This is the first time such “pinching” has been created in a laboratory, though it has been seen in plasma trapped in the magnetic fields of Earth and Jupiter.

A much bigger LDX would have to be built to reach the density levels needed for fusion, the scientists said.

Paper: Symmetric Inertial Confinement Fusion Implosions at Ultra-High Laser Energies

Sources: Science Magazine, LiveScience