What Is The Great Attractor?

There’s a strange place in the sky where everything is attracted. And unfortunately, it’s on the other side of the Milky Way, so we can’t see it. What could be doing all this attracting?

Just where the heck are we going? We’re snuggled in our little Solar System, hurtling through the cosmos at a blindingly fast 2.2 million kilometers per hour. We’re always orbiting this, and drifting through that, and we happen to be out in a region that’s not as horrifically terrifying as what some of our celestial neighbors go through. But where are we going? Just around in a great big circle? Or an ellipse? Which is going around in another circle… and it’s great big circles all the way up?

Not exactly… Our galaxy and other nearby galaxies are being pulled toward a specific region of space. It’s about 150 million light years away, and here is the best part. We’re not exactly sure what it is. We call it the Great Attractor.

Part of the reason the Great Attractor is so mysterious is that it happens to lie in a direction of the sky known as the “Zone of Avoidance”. This is in the general direction of the center of our galaxy, where there is so much gas and dust that we can’t see very far in the visible spectrum. We can see how our galaxy and other nearby galaxies are moving toward the Great Attractor, so something must be causing things to go in that direction. That means either there must be something massive over there, or it’s due to something even more strange and fantastic.

When evidence of the Great Attractor was first discovered in the 1970s, we had no way to see through the Zone of Avoidance. But while that region blocks much of the visible light from beyond, the gas and dust don’t block as much infrared and X-ray light. As X-ray astronomy became more powerful, we could start to see objects within that region. What we found was a massive cluster of galaxies in the area of the Great Attractor, known as the Norma Cluster. It has a mass of about 1,000 trillion Suns and contains thousands of galaxies.

A March 2013 picture of the Shapley Supercluster from the European Space Agency’s Planck observatory. ESA describes it as “the largest cosmic structure in the local Universe.” Credit: ESA & Planck Collaboration / Rosat/ Digitised Sky Survey

While the Norma Cluster is massive, and local galaxies are moving toward it, it doesn’t account for the full motion we observe: the mass we can see in the Great Attractor region isn’t large enough to explain the pull. When we look at an even larger region of galaxies, we find that the local galaxies and the Great Attractor are moving toward something even larger. It’s known as the Shapley Supercluster. It contains more than 8,000 galaxies and has a mass of more than ten million billion Suns. The Shapley Supercluster is, in fact, the most massive galaxy cluster within a billion light years, and we and every galaxy in our corner of the Universe are moving toward it.

So as we hurtle through the cosmos, gravity shapes the path we travel. We’re pulled towards the Great Attractor, and despite its glorious title, it appears, in fact, to be a perfectly normal collection of galaxies, which just happens to be hidden.

What do you think? What are you hoping we’ll discover over in the region of space we’re drifting towards?

And if you like what you see, come check out our Patreon page and find out how you can get these videos early while helping us bring you more great content!

Are the BICEP2 Results Invalid? Probably Not.

Galactic radio loops, with BICEP2 region indicated. Credit: Philipp Mertsch

Recently rumors have been flying that the BICEP2 results regarding the cosmic inflationary period may be invalid. It all started with a post by Adam Falkowski on his blog Resonaances, where he claimed that the BICEP2 team had misinterpreted some data, which rendered their results invalid, or at least questionable. The story was then picked up by Nature’s Blog and elsewhere, which has sparked some heated debate.

 So what’s really going on?

For those who might not remember, BICEP2 is a project working to detect polarized light within the cosmic microwave background (CMB). Specifically they were looking for a type of polarization known as B-mode polarization. Detection of B-mode polarization is important because one mechanism for it is cosmic inflation in the early universe, which is exactly what BICEP2 claimed to have evidence of.

Part of the reason BICEP2 got so much press is because B-mode polarization is particularly difficult to detect. It is a small signal, and you have to filter through a great deal of observational data to be sure that your result is valid.  But you also have to worry about other sources that look like B-mode polarization, and if you don’t account for them properly, then you could get a “false positive.” That’s where this latest drama arises.

In general this challenge is sometimes called the foreground problem.  Basically, the cosmic microwave background is the most distant light we can observe. All the galaxies, dust, and interstellar plasma, as well as our own galaxy, are between us and the CMB.  So to make sure that the data you gather is really from the CMB, you have to account for all the stuff in the way (the foreground).  We have ways of doing this, but it is difficult. The big challenge is to account for everything.

A map of foreground polarization from the Milky Way. Credit: ESA and the Planck Collaboration

Soon after the BICEP2 results, another team noted a foreground effect that could affect the BICEP2 results. It involves an effect known as radio loops, where dust particles trapped in interstellar magnetic fields can emit polarized light similar to B-mode polarization. How much of an effect this might have is unclear. A project being done with the Planck satellite is also looking at this foreground effect, and has released some initial results (seen in the figure), but hasn’t released the actual data yet.

Now it has come to light that BICEP2 did, in fact, take some of this foreground polarization into account, in part using results from Planck. But since the raw data hadn’t been released, the team used data taken from a PDF slide of Planck results and basically reverse-engineered the Planck data.  It is sometimes referred to as “data scraping”, and it isn’t ideal, but it works moderately well. Now there is some debate as to whether that slide presented the real foreground polarization or some averaged polarization. If it is the latter, then the BICEP2 results may have underestimated the foreground effect. Does this mean the BICEP2 results are completely invalid? Given what I’ve seen so far, I don’t think it does. Keep in mind that the Planck foreground is one of several foreground effects that BICEP2 did account for. It could be a large error, but it could also be a rather minor one.

The important thing to keep in mind is that the BICEP2 paper is still undergoing peer review.  Critical analysis of the paper is exactly what should happen, and is happening.  This type of review used to be confined to the ivory towers, but with social media it now happens in the open.  This is how science is done. BICEP2 has made a bold claim, and now everyone gets to whack at them like a piñata.

The BICEP2 team stands by their work, and so we’ll have to see whether it holds up to peer review.  We’ll also have to wait for the Planck team to release their results on B-mode polarization. Eventually the dust will settle and we’ll have a much better handle on the results.

How CERN’s Discovery of Exotic Particles May Affect Astrophysics

The difference between a neutron star and a quark star (Chandra)

You may have heard that CERN announced the discovery (confirmation, actually. See addendum below.) of a strange particle known as Z(4430).  A paper summarizing the results has been published on the physics arxiv, which is a repository for preprint (not yet peer reviewed) physics papers.  The new particle is about 4 times more massive than a proton, has a negative charge, and appears to be a theoretical particle known as a tetraquark.  The results are still young, but if this discovery holds up it could have implications for our understanding of neutron stars.

A periodic table of elementary particles. Credit: Wikipedia

The building blocks of matter are made of leptons (such as the electron and neutrinos) and quarks (which make up protons, neutrons, and other particles).  Quarks are very different from other particles in that they have an electric charge that is 1/3 or 2/3 that of the electron and proton.  They also possess a different kind of “charge” known as color.  Just as electric charges interact through an electromagnetic force, color charges interact through the strong nuclear force.  It is the color charge of quarks that works to hold the nuclei of atoms together. Color charge is much more complex than electric charge.  With electric charge there is simply positive (+) and its opposite, negative (-).  With color, there are three types (red, green, and blue) and their opposites (anti-red, anti-green, and anti-blue).

Because of the way the strong force works, we can never observe a free quark.  The strong force requires that quarks always group together to form a particle that is color neutral. For example, a proton consists of three quarks (two up and one down), where each quark is a different color.  With visible light, adding red, green and blue light gives you white light, which is colorless. In the same way, combining a red, green and blue quark gives you a particle which is color neutral.  This similarity to the color properties of light is why quark charge is named after colors.

Combining a quark of each color into groups of three is one way to create a color neutral particle, and these are known as baryons.  Protons and neutrons are the most common baryons.  Another way to combine quarks is to pair a quark of a particular color with a quark of its anti-color.  For example, a green quark and an anti-green quark could combine to form a color neutral particle.  These two-quark particles are known as mesons, and were first discovered in 1947.  For example, the positively charged pion consists of an up quark and an anti-down quark.

Under the rules of the strong force, there are other ways quarks could combine to form a neutral particle.  One of these, the tetraquark, combines four quarks, where two quarks have particular colors and the other two have the corresponding anti-colors.  Others, such as the pentaquark (3 colors + a color anti-color pair) and the hexaquark (3 colors + 3 anti-colors), have been proposed.  But so far all of these have been hypothetical.  While such particles would be color neutral, it is also possible that they aren’t stable and would simply decay into baryons and mesons.
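
To make that counting rule concrete, here is a minimal Python sketch (my own illustration, not from the original post) of the simplified bookkeeping: treat each anti-color as canceling its color, and call a combination color neutral when the leftover amounts of red, green, and blue are all equal (since red + green + blue acts like “white”). This is only a toy check, not the full group theory of the strong force.

```python
def is_color_neutral(quarks):
    """Toy color-neutrality check. quarks is a list of color labels,
    e.g. 'red' or 'anti-red'. Anti-colors cancel their colors, and the
    combination counts as neutral when the net red, green, and blue
    amounts are all equal (r+g+b together act like 'white')."""
    net = {"red": 0, "green": 0, "blue": 0}
    for q in quarks:
        if q.startswith("anti-"):
            net[q[len("anti-"):]] -= 1
        else:
            net[q] += 1
    return len(set(net.values())) == 1

# The combinations described in the text:
print(is_color_neutral(["red", "green", "blue"]))                     # baryon: True
print(is_color_neutral(["green", "anti-green"]))                      # meson: True
print(is_color_neutral(["red", "anti-red", "blue", "anti-blue"]))     # tetraquark: True
print(is_color_neutral(["red", "green", "blue", "red", "anti-red"]))  # pentaquark: True
print(is_color_neutral(["red", "green", "blue", "anti-red",
                        "anti-green", "anti-blue"]))                  # hexaquark: True
print(is_color_neutral(["red", "green"]))                             # two bare colors: False
```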

There have been some experimental hints of tetraquarks, but this latest result is the strongest evidence of 4 quarks forming a color neutral particle.  This means that quarks can combine in much more complex ways than we originally expected, and this has implications for the internal structure of neutron stars.

Very simply, the traditional model of a neutron star is that it is made of neutrons.  Neutrons consist of three quarks (two down and one up), but it is generally thought that particle interactions within a neutron star are interactions between neutrons.  With the existence of tetraquarks, it is possible for neutrons within the core to interact strongly enough to create tetraquarks.  This could even lead to the production of pentaquarks and hexaquarks, or even to quarks interacting individually without being bound into color neutral particles.  This would produce a hypothetical object known as a quark star.

This is all hypothetical at this point, but verified evidence of tetraquarks will force astrophysicists to reexamine some of the assumptions we have about the interiors of neutron stars.

Addendum: It has been pointed out that CERN’s results are not an original discovery, but rather a confirmation of earlier results by the Belle Collaboration.  The Belle results can be found in a 2008 paper in Physical Review Letters, as well as a 2013 paper in Physical Review D.  So credit where credit is due.

We’ve Discovered Inflation! Now What?

Polarization patterns imprinted in the CMB. Image Credit: CfA

Days like these make being an astrophysicist interesting.  On the one hand, there is the announcement from BICEP2 that the long-suspected theory of an inflationary big bang is actually true.  It’s the type of discovery that makes you want to grab random people off the street and tell them what an amazing thing the Universe is.  On the other hand, this is exactly the type of moment when we should be calm, and push back on the claims made by one research team.  So let’s take a deep breath and look at what we know, and what we don’t.

Inflation could mean our Universe is just one of many. Credit: Florida State University

First off, let’s dispel a few rumors.  This latest research is not the first evidence of gravitational waves.  The first indirect evidence for gravitational waves was found in the orbital decay of a binary pulsar by Russell Hulse and Joseph Taylor, for which they were awarded the Nobel prize in 1993. This new work is also not the first discovery of polarization within the cosmic microwave background, or even the first observation of B-mode polarization.  This new work is exciting because it finds evidence of a specific form of B-mode polarization due to primordial gravitational waves, the type of gravitational waves that would only be caused by inflation during the earliest moments of the Universe.

It should also be noted that this new work hasn’t yet been peer reviewed.  It will be, and it will most likely pass muster, but until it does we should be a bit cautious about the results.  Even then these results will need to be verified by other experiments.  For example, data from the Planck space telescope should be able to confirm these results assuming they’re valid.

That said, these new results are really, really interesting.

E-modes (left) and B-modes (right)

What the team did was to analyze what is known as B-mode polarization within the cosmic microwave background (CMB).  Light waves oscillate perpendicular to their direction of motion, similar to the way water waves oscillate up and down while they travel along the surface of water.  This means light can have an orientation.  For light from the CMB, this orientation has two modes, known as E and B.  The E-mode polarization is caused by temperature fluctuations in the CMB, and was first observed in 2002 by the DASI interferometer.

The B-mode polarization can occur in two ways.  The first is due to gravitational lensing of the E-mode.  The cosmic microwave background we see today has travelled for more than 13 billion years before reaching us.  Along its journey some of it has passed close enough to galaxies and the like to be gravitationally lensed.  This gravitational lensing twists the polarization a bit, giving some of it a B-mode polarization. This type was first observed in July of 2013.  The second way is due to gravitational waves from the early inflationary period of the universe.  If an inflationary period occurred, then it produced gravitational waves on a cosmic scale.  Just as the gravitational lensing produces B-mode polarization, these primordial gravitational waves produce a B-mode effect.  The discovery of this primordial-wave B-mode polarization is what was announced today.

The effect of early inflation on the size of the universe. Credit: NASA/COBE

Inflation has been proposed as a reason for why the cosmic microwave background is as uniform as it is. We see small fluctuations in the CMB, but not large hot or cold spots.  This means the early Universe must have been small enough for temperatures to even out.  But the CMB is so uniform that the observable universe must have been much smaller than predicted by the big bang.  However, if the Universe experienced a rapid increase in size during its early moments, then everything would work out.  The only problem was we didn’t have any direct evidence of inflation.

Assuming these new results hold up, now we do.  Not only that, we know that inflation was stronger than we anticipated.  The strength of the gravitational waves is measured by a value known as r (the tensor-to-scalar ratio), where larger is stronger.  It was found that r = 0.2, which is much higher than anticipated.  Based upon earlier results from the Planck telescope, it was expected that r < 0.11.  So there seems to be a bit of tension with earlier findings.  There are ways in which this tension can be resolved, but just how is yet to be determined.

So this work still needs to be peer reviewed, and it needs to be confirmed by other experiments, and then the tension between this result and earlier results needs to be resolved.  There is still much to do before we really understand inflation.  But overall this is really big news, possibly even Nobel prize worthy.  The results are so strong that it seems pretty clear we have direct evidence of cosmic inflation, which is a huge step forward.  Before today we only had physical evidence back to when the universe was about a second old, at a time when nucleosynthesis occurred.  With this new result we are now able to probe the Universe when it was less than 10 trillion trillion trillionths of a second old.

Which is pretty amazing when you think about it.


Why the Asteroid Belt Doesn’t Threaten Spacecraft

Artist's impression of the asteroid belt. Image credit: NASA/JPL-Caltech

When you think of the asteroid belt, you probably imagine a region of rock and dust, with asteroids as far as the eye can see.  Such a visual has been popularized in movies, where spaceships must swerve left and right to avoid collisions.  But a similar view is often portrayed in more scientific imagery, such as the artistic rendering above.  Even the first episode of the new Cosmos series portrayed the belt as a dense collection of asteroids. But the reality is very different: the asteroid belt is far less cluttered than often portrayed.  Just how much less might surprise you.

The Sloan Digital Sky Survey (SDSS) has identified more than 100,000 asteroids in the solar system.  Not all of these lie within the asteroid belt, but there are about 80,000 asteroids in the belt larger than a kilometer.  Of course there are asteroids smaller than that, but they are more difficult to detect, so we aren’t exactly sure how many there are.

The pyramid-shaped zodiacal light cone is centered on the same path the sun and planets take across the sky called the ecliptic. This map shows the sky 90 minutes after sunset in early March facing west. Created with Stellarium

We have a pretty good idea, however, because the observations we have indicate that the size distribution of asteroids follows what is known as a power law distribution. For example, with a power law of 1, for every 100-meter wide asteroid there would be 10 with a diameter of 10 meters and 100 with a diameter of 1 meter. Based upon SDSS observations, asteroids seem to follow a power law of about 2, which means there are likely about 800 trillion asteroids larger than a meter within the belt. That’s a lot of rock. So much so that sunlight scattering off dust in the asteroid belt, along with other dust in the solar system, is the source of the zodiacal light.

But there is also a lot of volume within the asteroid belt. The belt can be said to occupy a region around the Sun from about 2.2 to 3.2 astronomical units (AU, the average distance from the Earth to the Sun), with a thickness of about 1 AU. A bit of math puts that at about 50 trillion trillion cubic kilometers. So even though there are trillions of asteroids, each asteroid has billions of cubic kilometers of space on average. The asteroid belt is hardly something you would consider crowded. It should be emphasized that asteroids in the belt are not evenly distributed. They are clustered into families and groups. But even such clustering is not significant compared to the vast space the belt occupies.
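
Here is a minimal back-of-the-envelope sketch of that volume estimate (my own arithmetic, using the belt dimensions and the rough asteroid count quoted above):

```python
import math

# Rough volume of the asteroid belt: an annulus from about 2.2 to 3.2 AU
# from the Sun, roughly 1 AU thick (the dimensions quoted in the text).
AU_KM = 1.496e8  # one astronomical unit in kilometers

r_inner = 2.2 * AU_KM
r_outer = 3.2 * AU_KM
thickness = 1.0 * AU_KM

belt_volume = math.pi * (r_outer**2 - r_inner**2) * thickness
print(f"Belt volume: ~{belt_volume:.1e} cubic km")  # ~6e25: tens of trillions of trillions

# Divide by the rough count of meter-or-larger asteroids quoted above
# to get the average volume of empty space around each one.
n_asteroids = 8e14  # ~800 trillion
print(f"Average volume per asteroid: ~{belt_volume / n_asteroids:.1e} cubic km")  # tens of billions
```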

An actual image from within the asteroid belt, taken from the NEAR probe as it was heading toward Eros (center). Credit: NASA

You can even do a very rough calculation to get an idea of just how empty the asteroid belt actually is. If we assume that all the asteroids lie within a single plane, then on average there is one asteroid within an area roughly the size of Rhode Island. Within the entire United States there would be about 2,000 asteroids, most of them only a meter across. The odds of seeing an asteroid along a cross-country road trip, much less hitting one, would be astoundingly small. So you can see why we don’t worry about space probes hitting an asteroid on their way to the outer solar system.  In fact, to get even close to an asteroid takes a great deal of effort.

Planck “Star” to Arise From Black Holes?

Artistic view of a radiating black hole. Credit: NASA

A new paper has been posted on the arxiv (a repository of research preprints) introducing the idea of a Planck star arising from a black hole.  These hypothetical objects wouldn’t be a star in the traditional sense, but rather the light emitted when a black hole dies at the hands of Hawking radiation.  The paper hasn’t been peer reviewed, but it presents an interesting idea and a possible observational test.

When a large star reaches the end of its life, it explodes as a supernova, which can cause its core to collapse into a black hole.  In the traditional model of a black hole, the material collapses down into an infinitesimal volume known as a singularity.  Of course this doesn’t take into account quantum theory.

Although we don’t have a complete theory of quantum gravity, we do know a few things.  One is that black holes shouldn’t last forever.  Because of quantum fluctuations near the event horizon of a black hole, a black hole will emit Hawking radiation.  As a result, a black hole will gradually lose mass as it radiates.  The amount of Hawking radiation it emits is inversely proportional to its size, so as the black hole gets smaller it will emit more and more Hawking radiation until it finally radiates completely away.
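
For reference, the standard textbook expressions behind that statement (not spelled out in the post) are that the Hawking temperature scales inversely with the black hole’s mass, while the evaporation time grows as the cube of the mass:

```latex
T_H = \frac{\hbar c^3}{8\pi G M k_B} \;\propto\; \frac{1}{M},
\qquad
t_{\mathrm{evap}} \sim \frac{5120\,\pi G^2 M^3}{\hbar c^4} \;\propto\; M^3 .
```

So as the hole loses mass its temperature and output climb, and the final stages of evaporation happen very quickly.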

The fact that black holes don’t last forever has led Stephen Hawking and others to propose that black holes don’t have an event horizon, but rather an apparent horizon.  This would mean the material within a black hole would not collapse into a singularity, which is where this new paper comes in.

Diagram showing how matter approaches Planck density. Credit: Carlo Rovelli and Francesca Vidotto

The authors propose that rather than collapsing into a singularity, the matter within a black hole will collapse until it is about a trillionth of a meter in size.  At that point its density would be on the order of the Planck density.  When the black hole ends its life, this “Planck star” would be revealed.  Because this “star” would be at the Planck density, it would radiate at a specific wavelength of gamma rays.  So if they exist, a gamma ray telescope should be able to observe them.

Just to be clear, this is still pretty speculative.  So far there isn’t any observational evidence that such a Planck star exists.  It is, however, an interesting solution to the paradoxical side of black holes.


Why Hawking is Wrong About Black Holes

Artist rendering of a supermassive black hole. Credit: NASA / JPL-Caltech.

A recent paper by Stephen Hawking has created quite a stir, even leading Nature News to declare there are no black holes. As I wrote in an earlier post, that isn’t quite what Hawking claimed.  But it is now clear that Hawking’s claim about black holes is wrong because the paradox he tries to address isn’t a paradox after all.

It all comes down to what is known as the firewall paradox for black holes.  The central feature of a black hole is its event horizon.  The event horizon of a black hole is basically the point of no return when approaching a black hole.  In Einstein’s theory of general relativity, the event horizon is where space and time are so warped by gravity that you can never escape.  Cross the event horizon and you are forever trapped.

This one-way nature of an event horizon has long been a challenge to understanding gravitational physics.  For example, a black hole event horizon would seem to violate the laws of thermodynamics.  One of the principles of thermodynamics is that nothing should have a temperature of absolute zero.  Even very cold things radiate a little heat, but if a black hole traps light then it doesn’t give off any heat.  So a black hole would have a temperature of zero, which shouldn’t be possible.

Then in 1974 Stephen Hawking demonstrated that black holes do radiate light due to quantum mechanics. In quantum theory there are limits to what can be known about an object.  For example, you cannot know an object’s exact energy.  Because of this uncertainty, the energy of a system can fluctuate spontaneously, so long as its average remains constant.  What Hawking demonstrated is that near the event horizon of a black hole pairs of particles can appear, where one particle becomes trapped within the event horizon (reducing the black hole’s mass slightly) while the other can escape as radiation (carrying away a bit of the black hole’s energy).

While Hawking radiation solved one problem with black holes, it created another problem known as the firewall paradox.  When quantum particles appear in pairs, they are entangled, meaning that they are connected in a quantum way.  If one particle is captured by the black hole, and the other escapes, then the entangled nature of the pair is broken.  In quantum mechanics, we would say that the particle pair appears in a pure state, and the event horizon would seem to break that state.

Artist visualization of entangled particles. Credit: NIST.

Last year it was shown that if Hawking radiation is in a pure state, then either it cannot radiate in the way required by thermodynamics, or it would create a firewall of high energy particles near the surface of the event horizon.  This is often called the firewall paradox because according to general relativity if you happen to be near the event horizon of a black hole you shouldn’t notice anything unusual.  The fundamental idea of general relativity (the principle of equivalence) requires that if you are freely falling near the event horizon there shouldn’t be a raging firewall of high energy particles. In his paper, Hawking proposed a solution to this paradox by arguing that black holes don’t have event horizons.  Instead they have apparent horizons that don’t require a firewall to obey thermodynamics.  Hence the declaration of “no more black holes” in the popular press.

But the firewall paradox only arises if Hawking radiation is in a pure state, and a paper last month by Sabine Hossenfelder shows that Hawking radiation is not in a pure state.  In her paper, Hossenfelder shows that instead of being due to a pair of entangled particles, Hawking radiation is due to two pairs of entangled particles.  One entangled pair gets trapped by the black hole, while the other entangled pair escapes.  The process is similar to Hawking’s original proposal, but the Hawking particles are not in a pure state.

So there’s no paradox.  Black holes can radiate in a way that agrees with thermodynamics, and the region near the event horizon doesn’t have a firewall, just as general relativity requires.  So Hawking’s proposal is a solution to a problem that doesn’t exist.

What I’ve presented here is a very rough overview of the situation.  I’ve glossed over some of the more subtle aspects.  For a more detailed (and remarkably clear) overview check out Ethan Siegel’s post on his blog Starts With a Bang!  Also check out the post on Sabine Hossenfelder’s blog, Backreaction, where she talks about the issue herself.

How We Know Gravity is Not (Just) a Force

This artist’s impression shows the exotic double object that consists of a tiny, but very heavy neutron star that spins 25 times each second, orbited every two and a half hours by a white dwarf star. The neutron star is a pulsar named PSR J0348+0432 that is giving off radio waves that can be picked up on Earth by radio telescopes. Although this unusual pair is very interesting in its own right it is also a unique laboratory for testing the limits of physical theories. This system is radiating gravitational radiation, ripples in spacetime. Although these waves cannot be yet detected directly by astronomers on Earth they can be detected indirectly by measuring the change in the orbit of the system as it loses energy. As the pulsar is so small the relative sizes of the two objects are not drawn to scale.

When  we think of gravity, we typically think of it as a force between masses.  When you step on a scale, for example, the number on the scale represents the pull of the Earth’s gravity on your mass, giving you weight.  It is easy to imagine the gravitational force of the Sun holding the planets in their orbits, or the gravitational pull of a black hole.  Forces are easy to understand as pushes and pulls.

But we now understand that gravity as a force is only part of a more complex phenomenon described by the theory of general relativity.  While general relativity is an elegant theory, it’s a radical departure from the idea of gravity as a force.  As Carl Sagan once said, “Extraordinary claims require extraordinary evidence,” and Einstein’s theory is a very extraordinary claim.  But it turns out there are several extraordinary experiments that confirm the curvature of space and time.

The key to general relativity lies in the fact that everything in a gravitational field falls at the same rate.  Stand on the Moon and drop a hammer and a feather, and they will hit the surface at the same time.  The same is true for any object regardless of its mass or physical makeup, and this is known as the equivalence principle.

Since everything falls in the same way regardless of its mass, it means that without some external point of reference, a free-floating observer far from gravitational sources and a free-falling observer in the gravitational field of a massive body each have the same experience. For example, astronauts in the space station look as if they are floating without gravity.  Actually, the gravitational pull of the Earth on the space station is nearly as strong as it is at the surface.  The difference is that the space station (and everything in it) is falling.  The space station is in orbit, which means it is literally falling around the Earth.

The International Space Station orbiting Earth. Credit: NASA

This equivalence between floating and falling is what Einstein used to develop his theory.  In general relativity, gravity is not a force between masses.  Instead gravity is an effect of the warping of space and time in the presence of mass.  Without a force acting upon it, an object will move in a straight line.  If you draw a line on a sheet of paper, and then twist or bend the paper, the line will no longer appear straight.  In the same way, the straight path of an object is bent when space and time are bent.  This explains why all objects fall at the same rate.  The Earth’s mass warps spacetime in a particular way, so the straight paths of all objects are bent in the same way near the Earth.

So what kind of experiment could possibly prove that gravity is warped spacetime?  One stems from the fact that light can be deflected by a nearby mass.  It is often argued that since light has no mass, it shouldn’t be deflected by the gravitational force of a body.  This isn’t quite correct. Since light has energy, and by special relativity mass and energy are equivalent, Newton’s gravitational theory predicts that light would be deflected slightly by a nearby mass.  The difference is that general relativity predicts it will be deflected twice as much.
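
For those who want the numbers, the standard weak-field deflection angles (textbook results, not given in the original post) for a light ray passing a mass M at closest approach b are:

```latex
\theta_{\mathrm{Newton}} = \frac{2GM}{c^2 b},
\qquad
\theta_{\mathrm{GR}} = \frac{4GM}{c^2 b} .
```

For light grazing the edge of the Sun this works out to about 1.75 arcseconds in general relativity, twice the roughly 0.87 arcseconds of the Newtonian argument, and that factor of two is exactly what the eclipse measurement described below was designed to test.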

Description of Eddington’s experiment from the Illustrated London News (1919).

The effect was first observed by Arthur Eddington in 1919.  Eddington traveled to the island of Principe off the coast of West Africa to photograph a total eclipse. He had taken photos of the same region of the sky sometime earlier. By comparing the eclipse photos and the earlier photos of the same sky, Eddington was able to show that the apparent positions of stars shifted when the Sun was near.  The amount of deflection agreed with Einstein, and not Newton.  Since then we’ve seen a similar effect where the light of distant quasars and galaxies is deflected by closer masses.  It is often referred to as gravitational lensing, and it has been used to measure the masses of galaxies, and even see the effects of dark matter.

Another piece of evidence is known as the time-delay experiment.  The mass of the Sun warps space near it, so light passing near the Sun doesn’t travel in a perfectly straight line.  Instead it travels along a slightly curved path that is a bit longer.  This means light from a planet on the other side of the solar system from Earth reaches us a tiny bit later than we would otherwise expect.  The first measurement of this time delay was in the late 1960s by Irwin Shapiro.  Radio signals were bounced off Venus from Earth when the two planets were almost on opposite sides of the Sun. The measured delay of the signals’ round trip was about 200 microseconds, just as predicted by general relativity.  This effect is now known as the Shapiro time delay, and it means the average speed of light (as determined by the travel time) is slightly slower than the (always constant) instantaneous speed of light.
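
As a rough sanity check on that 200-microsecond figure, here is a minimal sketch (my own estimate, not from the original post) using the standard leading-order expression for the round-trip Shapiro delay when the signal grazes the Sun; the exact number depends on the precise geometry, so treat it as an order-of-magnitude check.

```python
import math

# Leading-order round-trip Shapiro delay for a radar signal that grazes the
# Sun on its way to Venus and back (superior conjunction):
#   dt ~ (4 G M_sun / c^3) * ln(4 * r_earth * r_venus / b**2)
# where b is the closest approach of the signal, here about one solar radius.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the Sun, kg
C = 2.998e8        # speed of light, m/s
AU = 1.496e11      # astronomical unit, m
R_SUN = 6.96e8     # radius of the Sun, m

r_earth = 1.00 * AU
r_venus = 0.72 * AU
b = R_SUN

delay = (4 * G * M_SUN / C**3) * math.log(4 * r_earth * r_venus / b**2)
print(f"Round-trip Shapiro delay: ~{delay * 1e6:.0f} microseconds")  # a couple hundred
```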

A third effect is gravitational waves.  If stars warp space around them, then the motion of stars in a binary system should create ripples in spacetime, similar to the way swirling your finger in water can create ripples on the water’s surface.  As the gravity waves radiate away from the stars, they take away some of the energy from the binary system. This means that the two stars gradually move closer together, an effect known as inspiralling. As the two stars inspiral, their orbital period gets shorter because their orbits are getting smaller.

Decay of pulsar period compared to prediction (dashed curve). Data from Hulse and Taylor, Plotted by the author.

For regular binary stars this effect is so small that we can’t observe it. However in 1974 two astronomers (Hulse and Taylor) discovered an interesting pulsar. Pulsars are rapidly rotating neutron stars that happen to radiate radio pulses in our direction. The pulse rate of pulsars is typically very, very regular. Hulse and Taylor noticed that this particular pulsar’s rate would speed up slightly then slow down slightly at a regular rate. They showed that this variation was due to the motion of the pulsar as it orbited a star. They were able to determine the orbital motion of the pulsar very precisely, calculating its orbital period to within a fraction of a second. As they observed their pulsar over the years, they noticed its orbital period was gradually getting shorter. The pulsar is inspiralling due to the radiation of gravity waves, just as predicted.

Illustration of Gravity Probe B. Credit: Gravity Probe B Team, Stanford, NASA

Finally there is an effect known as frame dragging.  We have seen this effect near Earth itself.  Because the Earth is rotating, it not only curves spacetime by its mass, it twists spacetime around it due to its rotation.  This twisting of spacetime is known as frame dragging.  The effect is not very big near the Earth, but it can be measured through the Lense-Thirring effect.  Basically you put a spherical gyroscope in orbit, and see if its axis of rotation changes.  If there is no frame dragging, then the orientation of the gyroscope shouldn’t change.  If there is frame dragging, then the spiral twist of space and time will cause the gyroscope to precess, and its orientation will slowly change over time.

Gravity Probe B results. Credit: Gravity Probe B team, NASA.

We’ve actually done this experiment with a satellite known as Gravity Probe B, and you can see the results in the figure here.  As you can see, they agree very well with the predictions of general relativity.

Each of these experiments shows that gravity is not simply a force between masses.  Gravity is instead an effect of space and time.  Gravity is built into the very shape of the universe.

Think on that the next time you step onto a scale.

Black Holes No More? Not Quite.

Artist concept of matter swirling around a black hole. (NASA/Dana Berry/SkyWorks Digital)

Nature News has announced that there are no black holes.  This claim is made by none other than Stephen Hawking, so does this mean black holes are no more?  It depends on whether Hawking’s new idea is right, and on what you mean by a black hole.  The claim is based on a new paper by Hawking that argues the event horizon of a black hole doesn’t exist.

The event horizon of a black hole is basically the point of no return when approaching a black hole.  In Einstein’s theory of general relativity, the event horizon is where space and time are so warped by gravity that you can never escape.  Cross the event horizon and you can only move inward, never outward.  The problem with a one-way event horizon is that it leads to what is known as the information paradox.

Professor Stephen Hawking during a zero-gravity flight. Image credit: Zero G.

The information paradox has its origin in thermodynamics, specifically the second law of thermodynamics.  In its simplest form it can be summarized as “heat flows from hot objects to cold objects”.  But the law is more useful when it is expressed in terms of entropy.  In this way it is stated as “the entropy of a system can never decrease.”  Many people interpret entropy as the level of disorder in a system, or the unusable part of a system.  That would mean things must always become less useful over time.  But entropy is really about the level of information you need to describe a system.  An ordered system (say, marbles evenly spaced in a grid) is easy to describe because the objects have simple relations to each other.  On the other hand, a disordered system (marbles randomly scattered) takes more information to describe, because there isn’t a simple pattern to them.  So when the second law says that entropy can never decrease, it is saying that the physical information of a system cannot decrease.  In other words, information cannot be destroyed.
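
The marbles analogy can even be illustrated with a toy computation (my own example, not from the original post): a patterned arrangement compresses to a short description, while a random scattering does not, which is the “information needed to describe the system” idea in miniature.

```python
import random
import zlib

# Toy illustration of entropy as description length: an ordered arrangement
# (a simple repeating pattern of "marbles") compresses to far fewer bytes
# than a randomly scattered one, because the random arrangement has no
# pattern for the compressor to exploit.
N = 10_000
ordered = bytes(i % 4 for i in range(N))                   # marbles in a regular grid pattern
scattered = bytes(random.randint(0, 3) for _ in range(N))  # marbles scattered at random

print("Ordered:  ", len(zlib.compress(ordered)), "bytes")    # short description
print("Scattered:", len(zlib.compress(scattered)), "bytes")  # much longer description
```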

The problem with event horizons is that you could toss an object (with a great deal of entropy) into a black hole, and the entropy would simply go away.  In other words, the entropy of the universe would get smaller, which would violate the second law of thermodynamics.  Of course this doesn’t take into account quantum effects, specifically what is known as Hawking radiation, which Stephen Hawking first proposed in 1974.

The original idea of Hawking radiation stems from the uncertainty principle in quantum theory.  In quantum theory there are limits to what can be known about an object.  For example, you cannot know an object’s exact energy.  Because of this uncertainty, the energy of a system can fluctuate spontaneously, so long as its average remains constant.  What Hawking demonstrated is that near the event horizon of a black hole pairs of particles can appear, where one particle becomes trapped within the event horizon (reducing the black hole’s mass slightly) while the other can escape as radiation (carrying away a bit of the black hole’s energy).

Hawking radiation near an event horizon. Credit: NAU.

Because these quantum particles appear in pairs, they are “entangled” (connected in a quantum way).  This doesn’t matter much, unless you want Hawking radiation to radiate the information contained within the black hole.  In Hawking’s original formulation, the particles appeared randomly, so the radiation emanating from the black hole was purely random.  Thus Hawking radiation would not allow you to recover any trapped information.

To allow Hawking radiation to carry information out of the black hole, the entangled connection between particle pairs must be broken at the event horizon, so that the escaping particle can instead be entangled with the information-carrying matter within the black hole.  This breaking of the original entanglement would make the escaping particles appear as an intense “firewall” at the surface of the event horizon.  This would mean that anything falling toward the black hole wouldn’t make it into the black hole.  Instead it would be vaporized by Hawking radiation when it reached the event horizon.  It would seem then that either the physical information of an object is lost when it falls into a black hole (information paradox) or objects are vaporized before entering a black hole (firewall paradox).

In this new paper, Hawking proposes a different approach.  He argues that rather than gravity warping space and time into an event horizon, the quantum fluctuations of Hawking radiation create a layer of turbulence in that region.  So instead of a sharp event horizon, a black hole would have an apparent horizon that looks like an event horizon, but allows information to leak out.  Hawking argues that the turbulence would be so great that the information leaving a black hole would be scrambled to the point of being effectively irrecoverable.

If Stephen Hawking is right, then it could solve the information/firewall paradox that has plagued theoretical physics.  Black holes would still exist in the astrophysics sense (the one in the center of our galaxy isn’t going anywhere) but they would lack event horizons.  It should be stressed that Hawking’s paper hasn’t been peer reviewed, and it is a bit lacking on details.  It is more of a presentation of an idea rather than a detailed solution to the paradox.  Further research will be needed to determine if this idea is the solution we’ve been looking for.

Why Einstein Will Never Be Wrong

Albert Einstein during a lecture in Vienna in 1921. Credit: National Library of Austria/F Schmutzer/Public Domain

One of the benefits of being an astrophysicist is your weekly email from someone who claims to have “proven Einstein wrong”. These either contain no mathematical equations and use phrases such as “it is obvious that…”, or they are page after page of complex equations with dozens of scientific terms used in non-traditional ways. They all get deleted pretty quickly, not because astrophysicists are too indoctrinated in established theories, but because none of them acknowledge how theories get replaced.

For example, in the late 1700s there was a theory of heat known as caloric. The basic idea of caloric was that it was a fluid that existed within materials. This fluid was self-repellant, meaning it would try to spread out as evenly as possible. We couldn’t observe this fluid directly, but the more caloric a material has the greater its temperature.

Ice-calorimeter from Antoine Lavoisier’s 1789 Elements of Chemistry. (Public Domain)

From this theory you get several predictions that actually work. Since you can’t create or destroy caloric, heat (energy) is conserved. If you put a cold object next to a hot object, the caloric in the hot object will spread out to the cold object until they reach the same temperature.  When air expands, the caloric is spread out more thinly, thus the temperature drops. When air is compressed there is more caloric per volume, and the temperature rises.

We now know there is no “heat fluid” known as caloric. Heat is a property of the motion (kinetic energy) of atoms or molecules in a material. So in physics we’ve dropped the caloric model in favor of kinetic theory. You could say we now know that the caloric model is completely wrong.

Except it isn’t. At least no more wrong than it ever was.

The basic assumption of a “heat fluid” doesn’t match reality, but the model makes predictions that are correct. In fact the caloric model works as well today as it did in the late 1700s. We don’t use it anymore because we have newer models that work better. Kinetic theory makes all the predictions caloric does and more. Kinetic theory even explains how the thermal energy of a material can be approximated as a fluid.

This is a key aspect of scientific theories. If you want to replace a robust scientific theory with a new one, the new theory must be able to do more than the old one. When you replace the old theory you now understand the limits of that theory and how to move beyond it.

In some cases even when an old theory is supplanted we continue to use it. Such an example can be seen in Newton’s law of gravity. When Newton proposed his theory of universal gravity in the 1600s, he described gravity as a force of attraction between all masses. This allowed for the correct prediction of the motion of the planets, the discovery of Neptune, the basic relation between a star’s mass and its temperature, and on and on. Newtonian gravity was and is a robust scientific theory.

Then in the early 1900s Einstein proposed a different model known as general relativity. The basic premise of this theory is that gravity is due to the curvature of space and time by masses.  Even though Einstein’s gravity model is radically different from Newton’s, the mathematics of the theory shows that Newton’s equations are approximate solutions to Einstein’s equations.  Everything Newton’s gravity predicts, Einstein’s does as well. But Einstein also allows us to correctly model black holes, the big bang, the precession of Mercury’s orbit, time dilation, and more, all of which have been experimentally validated.
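
One standard way to see this (a textbook result, not spelled out in the post) is the weak-field limit of general relativity: for weak gravity and slow motion the metric reduces to

```latex
ds^2 \approx -\left(1 + \frac{2\Phi}{c^2}\right)c^2\,dt^2
           + \left(1 - \frac{2\Phi}{c^2}\right)\left(dx^2 + dy^2 + dz^2\right),
\qquad
\nabla^2 \Phi = 4\pi G \rho ,
```

where Φ is precisely the Newtonian gravitational potential. Geodesic motion in this metric reproduces Newton’s inverse-square law, which is the sense in which Newton’s equations are approximate solutions to Einstein’s.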

So Einstein trumps Newton. But Einstein’s theory is much more difficult to work with than Newton’s, so often we just use Newton’s equations to calculate things. For example, the motion of satellites, or exoplanets. If we don’t need the precision of Einstein’s theory, we simply use Newton to get an answer that is “good enough.” We may have proven Newton’s theory “wrong”, but the theory is still as useful and accurate as it ever was.

Unfortunately, many budding Einsteins don’t understand this.

Binary waves from black holes. Image Credit: K. Thorne (Caltech) , T. Carnahan (NASA GSFC)

To begin with, Einstein’s gravity will never be proven wrong by a theory. It will be proven wrong by experimental evidence showing that the predictions of general relativity don’t work. Einstein’s theory didn’t supplant Newton’s until we had experimental evidence that agreed with Einstein and didn’t agree with Newton. So unless you have experimental evidence that clearly contradicts general relativity, claims of “disproving Einstein” will fall on deaf ears.

The other way to trump Einstein would be to develop a theory that clearly shows how Einstein’s theory is an approximation of your new theory, or how the experimental tests general relativity has passed are also passed by your theory.  Ideally, your new theory will also make new predictions that can be tested in a reasonable way.  If you can do that, and can present your ideas clearly, you will be listened to.  String theory and entropic gravity are examples of models that try to do just that.

But even if someone succeeds in creating a theory better than Einstein’s (and someone almost certainly will), Einstein’s theory will still be as valid as it ever was.  Einstein won’t have been proven wrong, we’ll simply understand the limits of his theory.