Small Asteroids, Bread Flour, and a Dutch Physicist’s 150-year Old Theory

Itokawa, a dusty asteroid (Credit: JAXA)

No, it’s not the Universe Puzzle No. 3; rather, it’s an intriguing result from recent work into the strange shapes and composition of small asteroids.

Images sent back from space missions suggest that smaller asteroids are not pristine chunks of rock, but are instead covered in rubble that ranges in size from meter-sized boulders to flour-like dust. Indeed some asteroids appear to be up to 50% empty space, suggesting that they could be collections of rubble with no solid core.

But how do these asteroids form and evolve? And if we ever have to deflect one, to avoid the fate of the dinosaurs, how to do so without breaking it up, and making the danger far greater?

Johannes Diderik van der Waals (1837-1923), with a little help from Daniel Scheeres, Michael Swift, and colleagues, to the rescue.

Rocks and dust on asteroid Eros (Credit: NASA)

Asteroids tend to spin rapidly on their axes – and gravity at the surface of smaller bodies can be one thousandth or even one millionth of that on Earth. As a result scientists are left wondering how the rubble clings on to the surface. “The few images that we have of asteroid surfaces are a challenge to understand using traditional geophysics,” University of Colorado’s Scheeres explained.

To get to the bottom of this mystery, the team – Daniel Scheeres, colleagues at the University of Colorado, and Michael Swift at the University of Nottingham – made a thorough study of the relevant forces involved in binding rubble to an asteroid. The formation of small bodies in space involves gravity and cohesion – the latter being the attraction between molecules at the surface of materials. While gravity is well understood, the nature of the cohesive forces at work in the rubble and their relative strengths is much less well known.

The team assumed that the cohesive forces between grains are similar to those found in “cohesive powders” – which include bread flour – because such powders resemble what has been seen on asteroid surfaces. To gauge the significance of these forces, the team considered their strength relative to the gravitational forces present on a small asteroid where gravity at the surface is about one millionth that on Earth. The team found that gravity is an ineffective binding force for rocks observed on smaller asteroids. Electrostatic attraction was also negligible, other than where a portion of the asteroid that is illuminated by the Sun comes into contact with a portion in shadow.

Fast backward to the mid-19th century, a time when the existence of molecules was controversial, and inter-molecular forces pure science fiction (except, of course, that there was no such thing as science fiction then). Van der Waals’ doctoral thesis provided a powerful explanation for the transition between gaseous and liquid phases, in terms of weak forces between the constituent molecules, which he assumed have a finite size (more than half a century was to pass before these forces were understood, quantitatively, in terms of quantum mechanics and atomic theory).

Van der Waals forces – weak electrostatic attractions between adjacent atoms or molecules that arise from fluctuations in the positions of their electrons – seem to do the trick for particles that are less than about one meter in size. The size of the van der Waals force is proportional to the contact surface area of a particle – unlike gravity, which is proportional to the mass (and therefore volume) of the particle. As a result, the relative strength of van der Waals compared with gravity increases as the particle gets smaller.
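To get a feel for that scaling, here is a rough back-of-the-envelope sketch (not the team’s actual model): it pits a simple Hamaker-style estimate of the van der Waals force on a grain, F ≈ A·r/(12·D²), against the grain’s weight on an asteroid where surface gravity is a millionth of Earth’s. The Hamaker constant A, the effective surface separation D, and the grain density are assumed, illustrative values, and the exact crossover size depends strongly on them.

```python
import math

A = 1e-19        # Hamaker constant, J (typical order of magnitude; assumed)
D = 1e-9         # effective surface separation, m (assumed; real contacts vary widely)
rho = 3000.0     # grain density, kg/m^3 (assumed, rock-like)
g_ast = 9.81e-6  # surface gravity about one millionth of Earth's, m/s^2

def van_der_waals_force(r):
    """Hamaker-style contact force on a sphere of radius r, in newtons."""
    return A * r / (12 * D**2)

def weight(r):
    """Gravitational force on the same sphere at the asteroid's surface, in newtons."""
    return (4.0 / 3.0) * math.pi * r**3 * rho * g_ast

for r in (1e-6, 1e-3, 1.0):  # a 1 micron dust grain, a 1 mm pebble, a 1 m boulder
    ratio = van_der_waals_force(r) / weight(r)
    print(f"r = {r:.0e} m  ->  F_vdW / F_grav ~ {ratio:.1e}")
# The ratio falls off as 1/r^2: cohesion dominates for fine dust, gravity for
# metre-scale boulders, with the crossover sensitive to the assumed A and D.
```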

This could explain, for example, recent observations by Scheeres and colleagues that small asteroids are covered in fine dust – material that some scientists thought would be driven away by solar radiation. The research can also have implications for how asteroids respond to the “YORP effect” – the increase of the angular velocity of small asteroids by the absorption of solar radiation. This recent work suggests that, as the bodies spin faster, they would expel larger rocks while retaining smaller ones. If such an asteroid were a collection of rubble, the result could be an aggregate of smaller particles held together by van der Waals forces.

Asteroid expert Keith Holsapple of the University of Washington is impressed that not only has Scheeres’ team estimated the forces in play on an asteroid, it has also looked at how these vary with asteroid and particle size. “This is a very important paper that addresses a key issue in the mechanics of the small bodies of the solar system and particle mechanics at low gravity,” he said.

Scheeres noted that testing this theory requires a space mission to determine the mechanical and strength properties of an asteroid’s surface. “We are developing such a proposal now,” he said.

Source: Physics World. “Scaling forces to asteroid surfaces: The role of cohesion” is a preprint by Scheeres, et al. (arXiv:1002.2478), submitted for publication in Icarus.

ESA’s Tough Choice: Dark Matter, Sun Close Flyby, Exoplanets (Pick Two)

Thales Alenia Space and EADS Astrium concepts for Euclid (ESA)


Key questions relevant to fundamental physics and cosmology, namely the nature of the mysterious dark energy and dark matter (Euclid); the frequency of exoplanets around other stars, including Earth-analogs (PLATO); the closest look at our Sun yet possible, approaching to just 62 solar radii (Solar Orbiter) … but only two can fly! What would be your picks?

These three mission concepts have been chosen by the European Space Agency’s Science Programme Committee (SPC) as candidates for two medium-class missions to be launched no earlier than 2017. They now enter the definition phase, the next step required before the final decision is taken as to which missions are implemented.

These three missions are the finalists from 52 proposals that were either made or carried forward in 2007. They were whittled down to just six mission proposals in 2008 and sent for industrial assessment. Now that the reports from those studies are in, the missions have been pared down again. “It was a very difficult selection process. All the missions contained very strong science cases,” says Lennart Nordh, Swedish National Space Board and chair of the SPC.

And the tough decisions are not yet over. Only two of the three missions – Euclid, PLATO and Solar Orbiter – can be selected for the M-class launch slots. All three missions present challenges that will have to be resolved at the definition phase. A specific challenge, of which the SPC was conscious, is the ability of these missions to fit within the available budget. The final decision about which missions to implement will be taken after the definition activities are completed, which is foreseen to be in mid-2011.
Euclid is an ESA mission to map the geometry of the dark Universe. The mission would investigate the distance-redshift relationship and the evolution of cosmic structures. It would achieve this by measuring shapes and redshifts of galaxies and clusters of galaxies out to redshifts ~2, or equivalently to a look-back time of 10 billion years. It would therefore cover the entire period over which dark energy played a significant role in accelerating the expansion.
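As a quick sanity check on that last figure, here is a minimal sketch that numerically integrates the standard flat-ΛCDM look-back time out to z = 2, using round parameter values (H0 ≈ 70 km/s/Mpc, Ωm ≈ 0.27, ΩΛ ≈ 0.73, all assumed for illustration); it lands close to the quoted 10 billion years.

```python
import math

# Look-back time in flat LambdaCDM: t_lb = integral_0^z dz' / [(1 + z') H(z')]
H0_km_s_Mpc = 70.0                # assumed round value of the Hubble constant
Mpc_in_km = 3.0857e19             # kilometres in a megaparsec
H0 = H0_km_s_Mpc / Mpc_in_km      # Hubble constant in 1/s
Om, OL = 0.27, 0.73               # assumed matter / dark-energy fractions (flat universe)

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for flat LambdaCDM."""
    return math.sqrt(Om * (1 + z) ** 3 + OL)

def lookback_time_gyr(z, steps=100000):
    """Trapezoidal integration of the look-back time out to redshift z, in Gyr."""
    dz = z / steps
    total = 0.0
    for i in range(steps + 1):
        zi = i * dz
        w = 0.5 if i in (0, steps) else 1.0
        total += w / ((1 + zi) * E(zi))
    return total * dz / H0 / 3.156e16  # 3.156e16 seconds per Gyr

print(f"Look-back time to z = 2: {lookback_time_gyr(2.0):.1f} Gyr")  # ~10.5 Gyr
```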

By approaching as close as 62 solar radii, Solar Orbiter would view the solar atmosphere with high spatial resolution and combine this with measurements made in-situ. Over the extended mission periods Solar Orbiter would deliver images and data that would cover the polar regions and the side of the Sun not visible from Earth. Solar Orbiter would coordinate its scientific mission with NASA’s Solar Probe Plus within the joint HELEX program (Heliophysics Explorers) to maximize their combined science return.

Thales Alenia Space concept, from assessment phase (ESA)

PLATO (PLAnetary Transits and Oscillations of stars) would discover and characterize a large number of close-by exoplanetary systems, with a precision in the determination of mass and radius of 1%.

In addition, the SPC has decided to consider, at its next meeting in June, whether to also select a European contribution to the SPICA mission.

SPICA would be an infrared space telescope led by the Japanese Space Agency JAXA. It would provide ‘missing-link’ infrared coverage in the region of the spectrum between that seen by the ESA-NASA Webb telescope and the ground-based ALMA telescope. SPICA would focus on the conditions for planet formation and distant young galaxies.

“These missions continue the European commitment to world-class space science,” says David Southwood, ESA Director of Science and Robotic Exploration, “They demonstrate that ESA’s Cosmic Vision programme is still clearly focused on addressing the most important space science.”

Source: ESA chooses three scientific missions for further study

Does Zonal Swishing Play a Part in Earth’s Magnetic Field Reversals?

Zonal swishing in the Earth's outer core (Credit: Akira Kageyama, Kobe University)

Why does the Earth’s magnetic field ‘flip’ every million years or so? Whatever the reason, or reasons, the way the liquid iron of the Earth’s outer core flows – its currents, its structure, its long-term cycles – is important, either as cause, effect, or a bit of both.

The main component of the Earth’s field – which defines the magnetic poles – is a dipole generated by the convection of molten nickel-iron in the outer core (the inner core is solid, so its role is secondary; remember that the Earth’s core is well above the Curie temperature, so the iron is not ferromagnetic).

But what about the fine structure? Does the outer core have the equivalent of the Earth’s atmosphere’s jet streams, for example? Recent research by a team of geophysicists in Japan sheds some light on these questions, and so hints at what causes magnetic pole flips.

About the image: This image shows how an imaginary particle suspended in the liquid iron outer core of the Earth tends to flow in zones even when conditions in the geodynamo are varied. The colors represent the vorticity or “amount of rotation” that this particle experiences, where red signifies positive (east-west) flow and blue signifies negative (west-east) flow. Left to right shows how the flow responds to increasing Rayleigh numbers, which is associated with flow driven by buoyancy. Top to bottom shows how flow responds to increasing angular velocities of the whole geodynamo system.

The jet stream winds that circle the globe and those in the atmospheres of the gas giants (Jupiter, Saturn, etc) are examples of zonal flows. “A common feature of these zonal flows is that they are spontaneously generated in turbulent systems. Because the Earth’s outer core is believed to be in a turbulent state, it is possible that there is zonal flow in the liquid iron of the outer core,” Akira Kageyama at Kobe University and colleagues say, in their recent Nature paper. When the team modeled the geodynamo – which generates the Earth’s magnetic field – to build a more detailed picture of convection in the Earth’s outer core, they found a secondary flow pattern consisting of inner sheet-like radial plumes surrounded by a westward cylindrical zonal flow.

This work was carried out using the Earth Simulator supercomputer, based in Japan, which offered sufficient spatial resolution to determine these secondary effects. Kageyama and his team also confirmed, using a numerical model, that this dual-convection structure can co-exist with the dominant convection that generates the north and south poles; this is a critical consistency check on their models: “We numerically confirm that the dual-convection structure with such a zonal flow is stable under a strong, self-generated dipole magnetic field,” they write.

This kind of zonal flow in the outer core has not been seen in geodynamo models before, due largely to the lack of sufficient resolution in earlier models. What role these zonal flows play in the reversal of the Earth’s magnetic field is one line of research that Kageyama and his team’s results now make it possible to pursue.

Sources: Physics World, based on a paper in the 11 February, 2010 issue of Nature. Earth Simulator homepage

Einstein’s General Relativity Tested Again, Much More Stringently

Albert Einstein

This time it was the gravitational redshift part of General Relativity; and the stringency? An astonishing better-than-one-part-in-100-million!

How did Steven Chu (US Secretary of Energy, though this work was done while he was at the University of California Berkeley), Holger Müller (Berkeley), and Achim Peters (Humboldt University in Berlin) beat the previous best gravitational redshift test (in 1976, using two atomic clocks – one on the surface of the Earth and the other sent up to an altitude of 10,000 km in a rocket) by a staggering 10,000 times?

By exploiting wave-particle duality and superposition within an atom interferometer!

Cesium atom interferometer test of gravitational redshift (Courtesy Nature)

About this figure: Schematic of how the atom interferometer operates. The trajectories of the two atoms are plotted as functions of time. The atoms are accelerating due to gravity and the oscillatory lines depict the phase accumulation of the matter waves. Arrows indicate the times of the three laser pulses. (Courtesy: Nature).

Gravitational redshift is an inevitable consequence of the equivalence principle that underlies general relativity. The equivalence principle states that the local effects of gravity are the same as those of being in an accelerated frame of reference. So the downward force felt by someone in a lift could be equally due to an upward acceleration of the lift or to gravity. Pulses of light sent upwards from a clock on the lift floor will be redshifted when the lift is accelerating upwards, meaning that this clock will appear to tick more slowly when its flashes are compared at the ceiling of the lift to another clock. Because there is no way to tell gravity and acceleration apart, the same will hold true in a gravitational field; in other words the greater the gravitational pull experienced by a clock, or the closer it is to a massive body, the more slowly it will tick.

Confirmation of this effect supports the idea that gravity is geometry – a manifestation of spacetime curvature – because the flow of time is no longer constant throughout the universe but varies according to the distribution of massive bodies. Exploring the idea of spacetime curvature is important when distinguishing between different theories of quantum gravity because there are some versions of string theory in which matter can respond to something other than the geometry of spacetime.

Gravitational redshift, however, as a manifestation of local position invariance (the idea that the outcome of any non-gravitational experiment is independent of where and when in the universe it is carried out) is the least well confirmed of the three types of experiment that support the equivalence principle. The other two – the universality of freefall and local Lorentz invariance – have been verified with precisions of 10^-13 or better, whereas gravitational redshift had previously been confirmed only to a precision of 7×10^-5.

In 1997 Peters used laser trapping techniques developed by Chu to capture cesium atoms and cool them to a few millionths of a degree K (in order to reduce their velocity as much as possible), and then used a vertical laser beam to impart an upward kick to the atoms in order to measure gravitational freefall.

Now, Chu and Müller have re-interpreted the results of that experiment to give a measurement of the gravitational redshift.

In the experiment each of the atoms was exposed to three laser pulses. The first pulse placed the atom into a superposition of two equally probable states – either leaving it alone to decelerate and then fall back down to Earth under gravity’s pull, or giving it an extra kick so that it reached a greater height before descending. A second pulse was then applied at just the right moment so as to push the atom in the second state back faster toward Earth, causing the two superposition states to meet on the way down. At this point the third pulse measured the interference between these two states brought about by the atom’s existence as a wave, the idea being that any difference in gravitational redshift as experienced by the two states existing at different heights above the Earth’s surface would be manifest as a change in the relative phase of the two states.

The virtue of this approach is the extremely high frequency of a cesium atom’s de Broglie wave – some 3×10^25 Hz. Although during the 0.3 s of freefall the matter waves on the higher trajectory experienced an elapsed time of just 2×10^-20 s more than the waves on the lower trajectory did, the enormous frequency of their oscillation, combined with the ability to measure amplitude differences of just one part in 1000, meant that the researchers were able to confirm gravitational redshift to a precision of 7×10^-9.
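For the curious, here is a rough reality check on those numbers (a sketch, not the paper’s analysis): it uses only the figures quoted above plus standard constants, computing the cesium matter wave’s oscillation (Compton) frequency, mc²/h, and the phase difference that the quoted 2×10^-20 s proper-time offset would accumulate.

```python
import math

h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s
u = 1.6605e-27       # atomic mass unit, kg
m_cs = 132.905 * u   # mass of a cesium-133 atom, kg

# Oscillation (Compton) frequency of the cesium matter wave: f = m c^2 / h
f = m_cs * c**2 / h
print(f"Matter-wave frequency: {f:.1e} Hz")   # ~3e25 Hz, as quoted above

# Quoted proper-time difference between the two arms over 0.3 s of freefall
dtau = 2e-20  # s

# Phase difference accumulated between the two superposed states
dphi = 2 * math.pi * f * dtau
print(f"Accumulated phase difference: {dphi:.1e} rad")  # millions of radians
```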

As Müller puts it, “If the time of freefall was extended to the age of the universe – 14 billion years – the time difference between the upper and lower routes would be a mere one thousandth of a second, and the accuracy of the measurement would be 60 ps, the time it takes for light to travel about a centimetre.”

Müller hopes to further improve the precision of the redshift measurements by increasing the distance between the two superposition states of the cesium atoms. The distance achieved in the current research was a mere 0.1 mm, but, he says, by increasing this to 1 m it should be possible to detect gravitational waves, predicted by general relativity but not yet directly observed.

Sources: Physics World; the paper is in the 18 February, 2010 issue of Nature

Universe to WMAP: ΛCDM Rules, OK?

Temperature and polarization around hot and cold spots (Credit: NASA / WMAP Science Team)

The Wilkinson Microwave Anisotropy Probe (WMAP) science team has finished analyzing seven full years of data from the little probe that could, and once again it seems we can sum up the universe in six parameters and a model.

Using the seven-year WMAP data, together with recent results on the large-scale distribution of galaxies, and an updated estimate of the Hubble constant, the present-day age of the universe is 13.75 (plus-or-minus 0.11) billion years, dark energy comprises 72.8% (+/- 1.5%) of the universe’s mass-energy, baryons 4.56% (+/- 0.16%), non-baryonic matter (CDM) 22.7% (+/- 1.4%), and the redshift of reionization is 10.4 (+/- 1.2).

In addition, the team report several new cosmological constraints – primordial abundance of helium (this rules out various alternative, ‘cold big bang’ models), and an estimate of a parameter which describes a feature of density fluctuations in the very early universe sufficiently precisely to rule out a whole class of inflation models (the Harrison-Zel’dovich-Peebles spectrum), to take just two – as well as tighter limits on many others (number of neutrino species, mass of the neutrino, parity violations, axion dark matter, …).

The best eye-candy from the team’s six papers are the stacked temperature and polarization maps for hot and cold spots; if these spots are due to sound waves in matter frozen in when radiation (photons) and baryons parted company – the cosmic microwave background (CMB) encodes all the details of this separation – then there should be nicely circular rings, of rather exact sizes, around the spots. Further, the polarization directions should switch from radial to tangential, from the center out (for cold spots; vice versa for hot spots).

And that’s just what the team found!

Concerning Dark Energy. Since the Five-Year WMAP results were published, several independent studies with direct relevance to cosmology have been published. The WMAP team took those from observations of the baryon acoustic oscillations (BAO) in the distribution of galaxies; of Cepheids, supernovae, and a water maser in local galaxies; of time-delay in a lensed quasar system; and of high redshift supernovae, and combined them to reduce the nooks and crannies in parameter space in which non-cosmological constant varieties of dark energy could be hiding. At least some alternative kinds of dark energy may still be possible, but for now Λ, the cosmological constant, rules.

Concerning Inflation. Very, very, very early in the life of the universe – so the theory of cosmic inflation goes – there was a period of dramatic expansion, and the tiny quantum fluctuations before inflation became the giant cosmic structures we see today. “Inflation predicts that the statistical distribution of primordial fluctuations is nearly a Gaussian distribution with random phases. Measuring deviations from a Gaussian distribution,” the team reports, “is a powerful test of inflation, as how precisely the distribution is (non-) Gaussian depends on the detailed physics of inflation.” While the limits on non-Gaussianity (as it is called), from analysis of the WMAP data, only weakly constrain various models of inflation, they do leave almost nowhere for cosmological models without inflation to hide.

Concerning ‘cosmic shadows’ (the Sunyaev-Zel’dovich (SZ) effect). While many researchers have looked for cosmic shadows in WMAP data before – perhaps the best known to the general public is the 2006 Lieu, Mittaz, and Zhang paper (the SZ effect: hot electrons in the plasma which pervades rich clusters of galaxies interact with CMB photons, via inverse Compton scattering) – the WMAP team’s recent analysis is their first to investigate this effect. They detect the SZ effect directly in the nearest rich cluster (Coma; Virgo is behind the Milky Way foreground), and also statistically by correlation with the location of some 700 relatively nearby rich clusters. While the WMAP team’s finding is consistent with data from x-ray observations, it is inconsistent with theoretical models. Back to the drawing board for astrophysicists studying galaxy clusters.

Seven Year Microwave Sky (Credit: NASA/WMAP Science Team)

I’ll wrap up by quoting Komatsu et al. “The standard ΛCDM cosmological model continues to be an exquisite fit to the existing data.”

Primary source: Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation (arXiv:1001.4738). The five other Seven-Year WMAP papers are: Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Are There Cosmic Microwave Background Anomalies? (arXiv:1001.4758), Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Planets and Celestial Calibration Sources (arXiv:1001.4731), Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Sky Maps, Systematic Errors, and Basic Results (arXiv:1001.4744), Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Power Spectra and WMAP-Derived Parameters (arXiv:1001.4635), and Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Galactic Foreground Emission (arXiv:1001.4555). Also check out the official WMAP website.

What is Schrodinger’s Cat?

Schrodinger’s cat is named after Erwin Schrödinger, a physicist from Austria who made substantial contributions to the development of quantum mechanics in the 1920s and 1930s (he won a Nobel Prize for some of this work, in 1933). Apart from the poor cat (more later), his name is forever associated with quantum mechanics via the Schrödinger equation, which every physics student has to grapple with.

Schrodinger’s cat is actually a thought experiment (Gedankenexperiment) – and the cat may not have been Erwin’s, but his wife’s, or one of his lovers’ (Erwin had an unconventional lifestyle) – designed to test a really weird implication of the physics he and other physicists were developing at the time. It was motivated by a 1935 paper by Einstein, Podolsky, and Rosen; this paper is the source of the famous EPR paradox.

In the thought experiment, Schrodinger’s cat is placed inside a box containing a piece of radioactive material, and a Geiger counter wired to a flask of poison in such a way that if the Geiger counter detects a decay, then the flask is smashed, the poison gas released, and the cat dies (fun piece of trivia: an animal rights group accused physicists of cruelty to animals, based on a distorted version of this thought experiment! though maybe that’s just an urban legend). The half-life of the radioactive material is an hour, so after an hour, there is a 50% probability that the cat is dead, and an equal probability that it is alive. In quantum mechanics, these two states are superposed (a technical term), and the cat is neither dead nor alive, or half-dead and half-alive, or … which is really, really weird.
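A tiny sketch of that bookkeeping, treating the trigger as a single atom with a one-hour half-life (the simplest reading of the setup above): the probability that it has decayed by time t is 1 − 2^(−t/t_half), so after exactly one hour the “decayed” and “not decayed” branches – dead cat and live cat – are weighted 50/50.

```python
t_half = 1.0  # half-life of the trigger atom, hours

def p_decayed(t_hours):
    """Probability that the atom has decayed (and the poison released) by time t."""
    return 1.0 - 2.0 ** (-t_hours / t_half)

for t in (0.5, 1.0, 2.0):
    p = p_decayed(t)
    print(f"after {t:.1f} h: P(dead) = {p:.2f}, P(alive) = {1 - p:.2f}")
# After exactly one half-life the two branches are equally weighted (0.50 / 0.50) --
# the 50/50 superposition the thought experiment trades on.
```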

Now the theory – quantum mechanics – has been tested perhaps more thoroughly than any other theory in physics, and it seems to describe how the universe behaves with extraordinary accuracy. And the theory says that when the box is opened – to see if the cat is dead, alive, half-dead and half-alive, or anything else – the wavefunction (describing the cat, Geiger counter, etc) collapses, or decoheres, or that the states are no longer entangled (all technical terms), and we see only a dead cat or cat very much alive.

There are several ways to get your mind around what’s going on – or several interpretations (you guessed it, yet another technical term!) – with names like Copenhagen interpretation, many worlds interpretation, etc, but the key thing is that the theory is mute on the interpretations … it simply says you can calculate stuff using the equations, and what your calculations show is what you’ll see, in any experiment.

Fast forward to some time after Schrödinger – and Einstein, Podolsky, and Rosen – had died, and we find that tests of the EPR paradox were proposed, then conducted, and the universe does indeed seem to behave just like Schrödinger’s cat! In fact, the results from these experimental tests are used for a kind of uncrackable cryptography, and form the basis for a revolutionary kind of computer.

Keen to learn more? Try these: Schrödinger’s Rainbow is a slideshow review of the general topic (California Institute of Technology; caution, 3MB PDF file!); Schrodinger’s cat comes into view, a news story on a macroscopic demonstration; and Schrödinger’s Cat (University of Houston).

Schrodinger’s cat is indirectly referenced in several Astronomy Cast episodes, among them Quantum Mechanics, and Entanglement; check them out!

Sources: Cornell University, Wikipedia

Nuclear Fusion Power Closer to Reality Say Two Separate Teams

Nuclear fusion. Credit: Lancaster University


For years, scientists have been trying to replicate, in laboratories here on Earth, the type of nuclear fusion that occurs naturally in stars, in order to develop a clean and almost limitless source of energy. This week, two different research teams report significant headway in achieving inertial fusion ignition—a strategy to heat and compress a fuel that might allow scientists to harness the intense energy of nuclear fusion. One team used a massive laser system to test the possibility of heating heavy hydrogen atoms to ignition. The second team used a giant levitating magnet to bring matter to extremely high densities — a necessary step for nuclear fusion.

Unlike nuclear fission, which tears apart atoms to release energy and highly radioactive by-products, fusion involves putting immense pressure on – “squeezing” – two heavy hydrogen atoms, called deuterium and tritium, until they fuse. This produces harmless helium and vast amounts of energy.
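To put a number on “vast amounts of energy”, here is a standard back-of-the-envelope mass-defect calculation for the D + T → helium-4 + neutron reaction, using textbook atomic masses; it gives roughly 17.6 MeV per fusion event.

```python
# Energy released per D + T -> helium-4 + neutron fusion, from the mass defect.
# Textbook atomic masses in atomic mass units (the electron masses cancel).
m_D, m_T = 2.014102, 3.016049      # deuterium, tritium
m_He4, m_n = 4.002602, 1.008665    # helium-4, free neutron
u_to_MeV = 931.494                 # energy equivalent of one atomic mass unit

delta_m = (m_D + m_T) - (m_He4 + m_n)   # mass defect, in u
q_value = delta_m * u_to_MeV            # energy released per reaction, MeV
print(f"Mass defect: {delta_m:.6f} u  ->  Q = {q_value:.1f} MeV")  # ~17.6 MeV
```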

Recent experiments at the National Ignition Facility in Livermore, California used a massive laser system the size of three football fields. Siegfried Glenzer and his team aimed 192 intense laser beams at a small capsule—the size needed to store a mixture of deuterium and tritium, which, upon implosion, can trigger burning fusion plasmas and an outpouring of usable energy. The researchers heated the capsule to 3.3 million Kelvin, and in doing so, paved the way for the next big step: igniting and imploding a fuel-filled capsule.

In a second report released earlier this week, researchers used the Levitated Dipole Experiment, or LDX, and suspended a giant donut-shaped magnet weighing about half a ton in midair using an electromagnetic field. The researchers used the magnet to control the motion of an extremely hot gas of charged particles, called a plasma, contained within its outer chamber.

The donut magnet creates a turbulence called “pinching” that causes the plasma to condense, instead of spreading out, which usually happens with turbulence. This is the first time the “pinching” has been created in a laboratory. It has been seen in plasma in the magnetic fields of Earth and Jupiter.

A much bigger LDX would have to be built to reach the density levels needed for fusion, the scientists said.

Paper: Symmetric Inertial Confinement Fusion Implosions at Ultra-High Laser Energies

Sources: Science Magazine, LiveScience

What is the Boltzmann Constant?

Ludwig Boltzmann

There are actually two Boltzmann constants, the Boltzmann constant and the Stefan-Boltzmann constant; both play key roles in astrophysics … the first bridges the macroscopic and microscopic worlds, and provides the basis for the zero-th law of thermodynamics; the second is in the equation for blackbody radiation.

The zero-th law of thermodynamics is, in essence, what allows us to define temperature; if you could ‘look inside’ an isolated system (in equilibrium), the proportion of constituents making up the system with energy E is a function of E, and the Boltzmann constant (k or kB). Specifically, the probability is proportional to:

e^(-E/kT)

where T is the temperature. In SI units, k is 1.38 × 10^-23 J/K (that’s joules per kelvin). How Boltzmann’s constant links the macroscopic and microscopic worlds is perhaps most easily seen like this: k is the gas constant R (remember the ideal gas law, pV = nRT) divided by Avogadro’s number.
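A minimal sketch of both statements: recover k from the gas constant R and Avogadro’s number, then evaluate the Boltzmann factor e^(−E/kT) for an illustrative energy gap at room temperature (the chosen E and T are just examples, not anything from the text above).

```python
import math

# k = R / N_A, then an illustrative Boltzmann factor e^(-E/kT).
R = 8.314          # gas constant, J/(mol K)
N_A = 6.022e23     # Avogadro's number, 1/mol
k = R / N_A
print(f"k = {k:.3e} J/K")   # ~1.38e-23 J/K

# Relative probability of a state 0.1 eV above the ground state at room
# temperature (values chosen only for illustration).
E = 0.1 * 1.602e-19   # 0.1 eV in joules
T = 300.0             # kelvin
print(f"Boltzmann factor e^(-E/kT) = {math.exp(-E / (k * T)):.2e}")  # ~2e-2
```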

Among the many places k appears in physics is in the Maxwell-Boltzmann distribution, which describes the distribution of speeds of molecules in a gas … and thus why the Earth’s (and Venus’) atmosphere has lost all its hydrogen (and only keeps its helium because what is lost gets replaced by helium from radioactive decay, in rocks), and why the gas giants (and stars) can keep theirs.
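To see why the hydrogen leaves and the heavier gases stay, here is a rough sketch comparing the most-probable Maxwell-Boltzmann speed, v = √(2kT/m), for a few gases with Earth’s escape velocity. The ~1000 K exospheric temperature is an assumed round number, and the usual rule of thumb – that a gas leaks away over geological time once its typical thermal speed exceeds very roughly a sixth of the escape velocity – is only approximate.

```python
import math

# Most-probable Maxwell-Boltzmann speed v = sqrt(2 k T / m) for a few gases,
# compared with Earth's escape velocity. T ~ 1000 K (exosphere) is an assumed value.
k = 1.381e-23        # Boltzmann constant, J/K
u = 1.661e-27        # atomic mass unit, kg
T = 1000.0           # assumed exospheric temperature, K
v_escape = 11200.0   # Earth's escape velocity, m/s

gases = {"H2": 2.016, "He": 4.003, "N2": 28.01}  # molecular masses in u
for name, mass_u in gases.items():
    v_p = math.sqrt(2 * k * T / (mass_u * u))
    print(f"{name:>3}: v_p = {v_p:6.0f} m/s  ({v_p / v_escape:.2f} of escape velocity)")
# Hydrogen's typical speed is a sizable fraction of escape velocity, so the fast
# tail of its distribution escapes steadily; nitrogen sits far below and stays put.
```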

The Stefan-Boltzmann constant (σ) ties the amount of energy radiated by a black body (per unit of area of its surface) to the blackbody temperature (this is the Stefan-Boltzmann law). σ is made up of other constants: pi, a couple of integers, the speed of light, Planck’s constant, … and the Boltzmann constant! As astronomers rely almost entirely on detection of photons (electromagnetic radiation) to observe the universe, it will surely come as no surprise to learn that astrophysics students become very familiar with the Stefan-Boltzmann law, very early in their studies! After all, absolute luminosity (energy radiated per unit of time) is one of the key things astronomers try to estimate.
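As a concrete illustration of both points, here is a small sketch that assembles σ from those constants – σ = 2π⁵k⁴ / (15c²h³) – and then uses the Stefan-Boltzmann law, L = 4πR²σT⁴, to recover the Sun’s luminosity from round textbook values for its radius and effective temperature.

```python
import math

# Build the Stefan-Boltzmann constant from more fundamental constants, then use
# L = 4 pi R^2 sigma T^4 to estimate the Sun's luminosity (round textbook values).
k = 1.381e-23    # Boltzmann constant, J/K
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s

sigma = 2 * math.pi**5 * k**4 / (15 * c**2 * h**3)
print(f"sigma = {sigma:.3e} W m^-2 K^-4")   # ~5.67e-8

R_sun = 6.96e8   # solar radius, m
T_eff = 5772.0   # solar effective temperature, K
L_sun = 4 * math.pi * R_sun**2 * sigma * T_eff**4
print(f"L_sun ~ {L_sun:.2e} W")             # ~3.8e26 W
```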

Why does the Boltzmann constant pop up so often? Because the large-scale behavior of systems follows from what’s happening to the individual components of those systems, and the study of how to get from the small to the big (in classical physics) is statistical mechanics … which Boltzmann did most of the original heavy lifting in (along with Maxwell, Planck, and others); indeed, it was Planck who gave k its name, after Boltzmann’s death (and Planck who had Boltzmann’s entropy equation – with k – engraved on Boltzmann’s tombstone).

Want to learn more? Here are some resources, at different levels: Ideal Gas Law (from Hyperphysics), Radiation Laws (from an introductory astronomy course), and University of Texas (Austin)’s Richard Fitzpatrick’s course (intended for upper level undergrad students) Thermodynamics & Statistical Mechanics.

Sources:
Hyperphysics
Wikipedia

Physicists Tie Beam of Light into Knots

The colored circle represents the hologram, out of which the knotted optical vortex emerges. Credit: University of Bristol

Imagine taking a beam of light and tying it in knots like a piece of string. Hard to fathom? Well, a group of physicists from the UK have achieved this remarkable feat, and they say understanding how to control light in this way has important implications for laser technology used in a wide range of industries.

“In a light beam, the flow of light through space is similar to water flowing in a river,” said Dr. Mark Dennis from the University of Bristol and lead author of a paper published in Nature Physics this week. “Although it often flows in a straight line – out of a torch, laser pointer, etc – light can also flow in whirls and eddies, forming lines in space called ‘optical vortices.’ Along these lines, or optical vortices, the intensity of the light is zero (black). The light all around us is filled with these dark lines, even though we can’t see them.”

Optical vortices can be created with holograms which direct the flow of light. In this work, the team designed holograms using knot theory – a branch of abstract mathematics inspired by knots that occur in shoelaces and rope. Using these specially designed holograms they were able to create knots in optical vortices.

This new research demonstrates a physical application for a branch of mathematics previously considered completely abstract.

“The sophisticated hologram design required for the experimental demonstration of the knotted light shows advanced optical control, which undoubtedly can be used in future laser devices,” said Miles Padgett from Glasgow University, who led the experiments.

“The study of knotted vortices was initiated by Lord Kelvin back in 1867 in his quest for an explanation of atoms,” added Dennis, who began to study knotted optical vortices with Professor Sir Michael Berry at Bristol University in 2000. “This work opens a new chapter in that history.”

Paper: Isolated optical vortex knots by Mark R. Dennis, Robert P. King, Barry Jack, Kevin O’Holleran and Miles J. Padgett. Nature Physics, published online 17 January 2010.

Source: University of Bristol

New Pulsar “Clocks” Will Aid Gravitational Wave Detection

This illustration shows how a pulsar’s magnetic field (blue) creates narrow beams of radiation (magenta). Image credit: NASA

How do you detect a ripple in space-time itself? Well, you need hundreds of precision clocks distributed throughout the galaxy, and the Fermi gamma-ray telescope has given astronomers a new way to find them.

The “clocks” in question are actually millisecond pulsars – city-sized, sun-massed stars of ultradense matter that spin hundreds of times per second. Due to their powerful magnetic fields, pulsars emit most of their radiation in tightly focused beams, much like a lighthouse. Each spin of the pulsar corresponds to a “pulse” of radiation detectable from Earth. The rate at which millisecond pulsars pulse is extremely stable, so they serve as some of the most reliable clocks in the universe.

Astronomers watch for the slightest variations in the timing of millisecond pulsars which might suggest that space-time near the pulsar is being distorted by the passage of a gravitational wave. The problem is, to make a reliable measurement requires hundreds of pulsars, and until recently they have been extremely difficult to find.

“We’ve probably found far less than one percent of the millisecond pulsars in the Milky Way Galaxy,” said Scott Ransom of the National Radio Astronomy Observatory (NRAO).

Data from the Fermi gamma-ray space telescope, which began operating in 2008, have changed the way millisecond pulsars are detected. The Fermi telescope has identified hundreds of gamma-ray sources in the Milky Way. Gamma rays are high-energy photons, and they are produced near exotic objects, including millisecond pulsars.

“The data from Fermi were like a buried-treasure map,” Ransom said. “Using our radio telescopes to study the objects located by Fermi, we found 17 millisecond pulsars in three months. Large-scale searches had taken 10-15 years to find that many.”

Ransom and collaborator Mallory Roberts of Eureka Scientific used the National Science Foundation’s Robert C. Byrd Green Bank Telescope (GBT) to find eight of the 17 new pulsars.

Right now astronomers have barely enough millisecond pulsars to make a convincing gravitational wave detection, but with Fermi to help identify more pulsars, the odds of detecting these ripples in space-time are steadily increasing.

Ransom and Roberts announced their discoveries today at the American Astronomical Society’s meeting in Washington, DC.

(NRAO Press Release)