Does Free Will Exist? Ancient Quasars May Hold the Clue.

Artist’s interpretation of ULAS J1120+0641, a very distant quasar with a supermassive black hole at its heart. Credit: ESO/M. Kornmesser

Do you believe in free will? Are people able to decide their own destinies, whether it’s what continent they’ll live on, whom (or whether) they’ll marry, or just where they’ll get lunch today? Or are we just the unwitting pawns of some greater cosmic mechanism at work, ticking away the seconds and steering everyone and everything toward an inevitable, predetermined fate?

Philosophical debates aside, MIT researchers are looking to move past this age-old argument once and for all, using some of the most distant and brilliant objects in the Universe.

Rather than ponder the ancient musings of Plato and Aristotle, researchers at MIT were trying to determine how to get past a more recent conundrum in physics: Bell’s Theorem. Proposed by Northern Irish physicist John Bell in 1964, the theorem deals with the behavior of “entangled” quantum particles — particles separated by great distances yet correlated instantaneously when one or the other is measured — a phenomenon Einstein famously dismissed as “spooky action at a distance.”

The problem with such spookiness in the quantum universe is that it seems to violate some very basic tenets of what we know about the macroscopic universe — for instance, that nothing, including information, can travel faster than light. (A big no-no in physics.)

(Note: no actual information is transferred via quantum entanglement; rather, it’s the correlation between the particles’ states that appears to be established faster than light — experiments place the lower limit at thousands of times light speed.)
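For readers who want to see what a Bell test actually quantifies, here’s a minimal sketch (my own illustration, not from the MIT work) of the standard CHSH form of Bell’s inequality: any local hidden-variable model must give |S| ≤ 2, while quantum mechanics, using the textbook singlet-state correlation E(a, b) = −cos(a − b), can reach 2√2 ≈ 2.83. The detector angles below are the usual textbook choices, not values from the experiment.

```python
import math

def E(a, b):
    """Quantum correlation between two detectors measuring an entangled
    singlet pair at angles a and b (standard textbook result)."""
    return -math.cos(a - b)

# Conventional CHSH detector settings (radians)
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

# CHSH combination of the four correlations
S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

print(f"|S| = {abs(S):.3f}")        # ~2.828
print("Local hidden-variable limit: 2.000")
# Quantum mechanics exceeds the classical bound, which is what Bell-test
# experiments check -- provided the detector settings really are chosen
# independently of any hidden variables.
```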

Read more: Spooky Experiment on ISS Could Pioneer New Quantum Communications Network

Then again, testing against Bell’s Theorem has resulted in its own weirdness (even as quantum research goes). While some of the intrinsic “loopholes” in tests of Bell’s Theorem have been sealed up, one odd suggestion remains on the table: what if hidden variables — in effect, an absence of free will in how the measurements are chosen — are conspiring to affect how researchers calibrate their detectors and collect data, somehow steering them toward a conclusion biased against classical physics?

“It sounds creepy, but people realized that’s a logical possibility that hasn’t been closed yet,” said David Kaiser, Germeshausen Professor of the History of Science and senior lecturer in the Department of Physics at MIT in Cambridge, Mass. “Before we make the leap to say the equations of quantum theory tell us the world is inescapably crazy and bizarre, have we closed every conceivable logical loophole, even if they may not seem plausible in the world we know today?”

A color composite image of the quasar in HE0450-2958 obtained using the VISIR instrument on the Very Large Telescope and the Hubble Space Telescope. Image Credit: ESO

So in order to clear the air of any possible predestination by entangled interlopers, Kaiser and MIT postdoc Andrew Friedman, along with Jason Gallicchio of the University of Chicago, propose to look into the distant, early Universe for sufficiently unprejudiced parties: ancient quasars so far from each other that they have never been in causal contact.

According to a news release from MIT:

…an experiment would go something like this: A laboratory setup would consist of a particle generator, such as a radioactive atom that spits out pairs of entangled particles. One detector measures a property of particle A, while another detector does the same for particle B. A split second after the particles are generated, but just before the detectors are set, scientists would use telescopic observations of distant quasars to determine which properties each detector will measure of a respective particle. In other words, quasar A determines the settings to detect particle A, and quasar B sets the detector for particle B.

By using the light from objects that came into existence shortly after the Big Bang to calibrate their detectors, the team hopes to rule out any possibility that the detector settings were correlated or influenced in advance… and determine what’s really in charge of the Universe.

“I think it’s fair to say this is the final frontier, logically speaking, that stands between this enormously impressive accumulated experimental evidence and the interpretation of that evidence saying the world is governed by quantum mechanics,” said Kaiser.

Then again, perhaps that’s exactly what they’re supposed to do…

The paper was published this week in the journal Physical Review Letters.

Source: MIT Media Relations

Want to read more about the admittedly complex subject of entanglement and hidden variables (which may or may not really have anything to do with where you eat lunch)? Click here.

Quantum Entanglement Explained

A frame from the 'Quantum Entanglement Animated' video, via QuantumFrontiers.com

Confused by how particles can be in two places at once? Wondering how particles can instantly communicate with each other no matter what the distance? Quantum physics is a field of study that defies common sense at every turn, and quantum entanglement might lead the way in the defying-common-sense department. Entanglement is the unusual behavior of elementary particles whereby they become linked so that what happens to one is instantly correlated with what happens to the other, no matter how far apart they are. This bizarre behavior of particles that become inextricably linked together is what Einstein famously called “spooky action at a distance.”

This new video from PHD Comics provides a combination of live action and animation to try to explain entanglement.

“Not surprisingly, it was really hard to draw this video,” says animator Jorge Cham. “How do you depict something that has never existed before? And more importantly, do you draw alligators differently from crocodiles?”
Yes, that sentence actually makes sense when it comes to entanglement. And the advice at the end of the video from physicist Jeff Kimble is applicable to entanglement — and life in general — as well: “If you know what you’re doing, don’t do it…”

NOvA Experiment Nabs Its First Neutrinos

The NuMI (Neutrinos at the Main Injector) horn at Fermilab, which focuses the particles produced when protons strike a target; those particles then decay into neutrinos. (Image: Caltech)

Neutrinos are some of the most abundant, curious, and elusive critters in particle physics. Incredibly lightweight — massless in the original Standard Model, though we now know they carry a tiny mass — as well as chargeless, they zip around the Universe at very nearly the speed of light and barely interact with other particles at all. Some of them have been around since the Big Bang and, just in the time you’ve taken to read this, trillions of them have passed through your body (and more are on the way). But despite their ubiquitousness, neutrinos are notoriously difficult to study precisely because they ignore pretty much everything made out of anything else. So it’s not surprising that weighing a neutrino isn’t as simple as politely asking one to step on a scale.

Thankfully, particle physicists are a tenacious lot, including the ones at the U.S. Department of Energy’s Fermilab, and they aren’t giving up on their latest neutrino safari: the NuMI Off-Axis Electron Neutrino Appearance experiment, or NOvA. (Scientists represent neutrinos with the Greek letter nu, or ν.) It’s a very small-game hunt to catch neutrinos on the fly, and it uses some very big equipment to do the job. And it has already captured its first neutrinos — even before the setup is fully complete.

The neutrinos are created by smashing protons into a graphite target at Fermilab’s facility just outside Chicago, Illinois; the resulting beam is aimed 500 miles northwest at the NOvA far detector in Ash River, Minnesota, near the Canadian border. The very first beams were fired in Sept. 2013, while the Ash River facility was still under construction.

One of the first detections by NOvA of Fermilab-made neutrinos (Image courtesy of NOvA collaboration)

“That the first neutrinos have been detected even before the NOvA far detector installation is complete is a real tribute to everyone involved,” said University of Minnesota physicist Marvin Marshak, Ash River Laboratory director. “This early result suggests that the NOvA collaboration will make important contributions to our knowledge of these particles in the not so distant future.”

The 500-mile (800 km) subterranean path of the NOvA neutrino beam (Fermilab)

The beams from Fermilab are fired at two-second intervals, each pulse sending billions of neutrinos directly toward the detectors. The near detector at Fermilab confirms the initial “flavor” of the neutrinos in the beam, and the much larger far detector then determines whether the neutrinos have changed flavor during their three-millisecond underground interstate journey.

Again, because neutrinos don’t readily interact with ordinary particles, the beam can easily travel straight through the ground between the facilities — despite the curvature of the Earth. In fact the beam, which starts out 150 feet (45 meters) below ground near Chicago, eventually passes more than 6 miles (10 km) below the surface at the deepest point of its trip.
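As a rough sanity check on that depth, here’s a back-of-the-envelope sketch (my own, not Fermilab’s): treat the beam as a straight chord between two points on the Earth’s surface, ignoring the actual launch angle and aim point, so the result only lands in the right ballpark.

```python
import math

R_EARTH_KM = 6371.0   # mean Earth radius
BASELINE_KM = 810.0   # straight-line distance, Fermilab to Ash River

# Maximum depth of a straight chord of length L below a sphere of radius R:
# sagitta = R - sqrt(R^2 - (L/2)^2), roughly L^2 / (8R) for L << R
half = BASELINE_KM / 2
depth_km = R_EARTH_KM - math.sqrt(R_EARTH_KM**2 - half**2)

print(f"Midpoint depth of a straight 810 km chord: {depth_km:.1f} km")
# ~13 km -- the same order as the ~10 km quoted for the real beam, whose
# exact profile depends on where and how steeply it is aimed.
```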

According to a press release from Fermilab, neutrinos “come in three types, called flavors (electron, muon, or tau), and change between them as they travel. The two detectors of the NOvA experiment are placed so far apart to give the neutrinos the time to oscillate from one flavor to another while traveling at nearly the speed of light. Even though only a fraction of the experiment’s larger detector, called the far detector, is fully built, filled with scintillator and wired with electronics at this point, the experiment has already used it to record signals from its first neutrinos.”
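To see why the two detectors are placed so far apart, here’s a minimal sketch using the standard two-flavor oscillation formula. The mixing angle, mass splitting, and beam energy below are representative textbook values I’ve assumed for illustration — they do not come from the press release.

```python
import math

def survival_prob(L_km, E_GeV, sin2_2theta=1.0, dm2_eV2=2.4e-3):
    """Two-flavor probability that a muon neutrino is still a muon
    neutrino after travelling L km with energy E GeV (standard formula)."""
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

L = 810.0   # km, Fermilab to Ash River
E = 2.0     # GeV, a typical beam energy (assumed here)

p = survival_prob(L, E)
print(f"Muon-neutrino survival probability: {p:.2f}")
# ~0.11 with these assumed values: most of the muon neutrinos have changed
# flavor by the time they reach Minnesota, which is exactly what a long
# baseline is designed to make happen.
```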

The 50-foot (15 m) tall detector blocks are filled with a liquid scintillator made of 95% mineral oil and 5% of a liquid hydrocarbon called pseudocumene, which is toxic but “imperative to the neutrino-detecting process.” The mixture gives off light when charged particles produced by neutrino interactions pass through it, allowing the neutrino strikes to be more easily detected and measured. (Source)

“NOvA represents a new generation of neutrino experiments,” said Fermilab Director Nigel Lockyer. “We are proud to reach this important milestone on our way to learning more about these fundamental particles.”

One of NOvA’s 28 far detector blocks (Fermilab)

After completion this summer, NOvA’s near and far detectors will weigh 300 and 14,000 tons, respectively.

The goal of the NOvA experiment is to capture neutrinos, measure how the different flavors oscillate and pin down the ordering of their masses, and determine whether neutrinos are their own antiparticles (they could be, since they carry no electric charge). By comparing the oscillations (i.e., flavor changes) of muon neutrino beams vs. muon antineutrino beams fired from Fermilab, scientists hope to determine the neutrino mass hierarchy — and ultimately discover why the Universe currently contains much more matter than antimatter.

Read more: Neutrino Detection Could Help Paint an Entirely New Picture of the Universe

Once the experiment is fully operational, scientists expect to catch a precious few neutrinos every day — about 5,000 total over the course of its six-year run. Until then, they at least now have their first few on the books.

“Seeing neutrinos in the first modules of the detector in Minnesota is a major milestone. Now we can start doing physics.”
– Rick Tesarek, Fermilab physicist

Learn more about the development and construction of the NOvA experiment below:


(Video credit: Fermilab)

Find out more about the NOvA research goals here.

Source: Fermilab press release

The NOvA collaboration is made up of 208 scientists from 38 institutions in the United States, Brazil, the Czech Republic, Greece, India, Russia and the United Kingdom. The experiment receives funding from the U.S. Department of Energy, the National Science Foundation and other funding agencies.

Planck “Star” to Arise From Black Holes?

Artistic view of a radiating black hole. Credit: NASA

A new paper has been posted on the arXiv (a repository of research preprints) introducing the idea of a Planck star arising from a black hole.  These hypothetical objects wouldn’t be stars in the traditional sense, but rather the light emitted when a black hole dies at the hands of Hawking radiation.  The paper hasn’t been peer reviewed, but it presents an interesting idea and a possible observational test.

When a large star reaches the end of its life, it explodes as a supernova, which can cause its core to collapse into a black hole.  In the traditional model of a black hole, the material collapses down into an infinitesimal volume known as a singularity.  Of course this doesn’t take into account quantum theory.

Although we don’t have a complete theory of quantum gravity, we do know a few things.  One is that black holes shouldn’t last forever.  Because of quantum fluctuations near the event horizon of a black hole, a black hole will emit Hawking radiation.  As a result, a black hole will gradually lose mass as it radiates.  The temperature of that radiation is inversely proportional to the black hole’s mass, so as the black hole gets smaller it will emit more and more Hawking radiation until it finally radiates completely away.
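To put some rough numbers on that, here’s a quick sketch using the standard formulas for the Hawking temperature and evaporation time of a non-rotating black hole — textbook expressions, not taken from the Planck-star paper.

```python
import math

# Physical constants (SI units)
G     = 6.674e-11    # gravitational constant
c     = 2.998e8      # speed of light
hbar  = 1.055e-34    # reduced Planck constant
k_B   = 1.381e-23    # Boltzmann constant
M_SUN = 1.989e30     # solar mass, kg

def hawking_temperature(M):
    """Hawking temperature of a black hole of mass M (kelvin)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time(M):
    """Time for a black hole of mass M to radiate away completely (seconds)."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

M = M_SUN
years = evaporation_time(M) / 3.15e7
print(f"Hawking temperature: {hawking_temperature(M):.1e} K")   # ~6e-8 K
print(f"Evaporation time:    {years:.1e} years")                # ~2e67 years
# Smaller black holes are hotter (T ~ 1/M) and evaporate faster (t ~ M^3),
# which is why the radiation runs away as the hole shrinks.
```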

The fact that black holes don’t last forever has led Stephen Hawking and others to propose that black holes don’t have a true event horizon, but rather an apparent horizon.  This would mean the material within a black hole would not collapse into a singularity, which is where this new paper comes in.

Diagram showing how matter approaches Planck density. Credit: Carlo Rovelli and Francesca Vidotto

The authors propose that rather than collapsing into a singularity, the matter within a black hole will collapse until it is about a trillionth of a meter in size.  At that point its density would be on the order of the Planck density.  When the black hole ends its life, this “Planck star” would be revealed.  Because this “star” would be at the Planck density, it would radiate at a specific wavelength of gamma rays.  So if they exist, a gamma ray telescope should be able to observe them.

Just to be clear, this is still pretty speculative.  So far there isn’t any observational evidence that such a Planck star exists.  It is, however, an interesting solution to the paradoxical side of black holes.


How A Laser Appears To Move Faster Than Light (And Why It Really Isn’t)

Gieren et al. used the 8.2-m Very Large Telescope (Yepun) to image M33, and deduce the distance to that galaxy (image credit: ESO).

We at Universe Today often hear theories purporting to prove Einstein wrong, and perhaps the most commonly cited target is the speed limit for light used in his relativity theories. In a vacuum, light travels at close to 300,000 km/s (roughly 186,000 miles a second). Using a bit of geometry, however, isn’t there a way to make it go faster? The video below shows why you might think that would work — and why it actually wouldn’t.

“There is a classic method where you shine a laser at the moon. If you can flick that beam across the moon’s surface in less than a hundredth of a second, which is not hard to do, then that laser spot will actually move across the surface of the moon faster than the speed of light,” says the host on this Veritasium video.

“In truth, nothing here is really travelling faster than the speed of light. The individual particles coming out of my laser, the photons, are still travelling to the moon at the speed of light. It’s just that they’re landing side by side in such quick succession that they form a spot that moves faster than the speed of light, but really, it’s just an illusion.”
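A quick check of the arithmetic behind that claim (a sketch: the Moon’s diameter of roughly 3,474 km is my assumption, not a figure given in the video):

```python
C = 2.998e8               # speed of light, m/s
MOON_DIAMETER = 3.474e6   # m, roughly
SWEEP_TIME = 0.01         # s, "less than a hundredth of a second"

spot_speed = MOON_DIAMETER / SWEEP_TIME
print(f"Spot speed: {spot_speed:.2e} m/s  ({spot_speed / C:.2f} c)")
# ~3.5e8 m/s, a bit faster than light -- but the spot is just a pattern of
# photon arrivals, not a thing that travels, so nothing physical (and no
# information) actually outruns light.
```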

There are plenty of other ways that light can appear to move faster than the cosmic speed limit, and you can check out more of those in the video.

Why Hawking is Wrong About Black Holes

Artist rendering of a supermassive black hole. Credit: NASA / JPL-Caltech.

A recent paper by Stephen Hawking has created quite a stir, even leading Nature News to declare there are no black holes. As I wrote in an earlier post, that isn’t quite what Hawking claimed.  But it is now clear that Hawking’s claim about black holes is wrong because the paradox he tries to address isn’t a paradox after all.

It all comes down to what is known as the firewall paradox for black holes.  The central feature of a black hole is its event horizon.  The event horizon of a black hole is basically the point of no return when approaching a black hole.  In Einstein’s theory of general relativity, the event horizon is where space and time are so warped by gravity that you can never escape.  Cross the event horizon and you are forever trapped.

This one-way nature of an event horizon has long been a challenge to understanding gravitational physics.  For example, a black hole event horizon would seem to violate the laws of thermodynamics.  One of the principles of thermodynamics is that nothing should have a temperature of absolute zero.  Even very cold things radiate a little heat, but if a black hole traps light then it doesn’t give off any heat.  So a black hole would have a temperature of zero, which shouldn’t be possible.

Then in 1974 Stephen Hawking demonstrated that black holes do radiate light due to quantum mechanics. In quantum theory there are limits to what can be known about an object.  For example, you cannot know an object’s exact energy.  Because of this uncertainty, the energy of a system can fluctuate spontaneously, so long as its average remains constant.  What Hawking demonstrated is that near the event horizon of a black hole pairs of particles can appear, where one particle becomes trapped within the event horizon (reducing the black hole’s mass slightly) while the other can escape as radiation (carrying away a bit of the black hole’s energy).

While Hawking radiation solved one problem with black holes, it created another problem known as the firewall paradox.  When quantum particles appear in pairs, they are entangled, meaning that they are connected in a quantum way.  If one particle is captured by the black hole, and the other escapes, then the entangled nature of the pair is broken.  In quantum mechanics, we would say that the particle pair appears in a pure state, and the event horizon would seem to break that state.

Artist visualization of entangled particles. Credit: NIST.

Last year it was shown that if Hawking radiation is in a pure state, then either it cannot radiate in the way required by thermodynamics, or it would create a firewall of high energy particles near the surface of the event horizon.  This is often called the firewall paradox because according to general relativity if you happen to be near the event horizon of a black hole you shouldn’t notice anything unusual.  The fundamental idea of general relativity (the principle of equivalence) requires that if you are freely falling toward the event horizon there shouldn’t be a raging firewall of high energy particles.

In his paper, Hawking proposed a solution to this paradox by suggesting that black holes don’t have event horizons.  Instead they have apparent horizons that don’t require a firewall to obey thermodynamics.  Hence the declaration of “no more black holes” in the popular press.

But the firewall paradox only arises if Hawking radiation is in a pure state, and a paper last month by Sabine Hossenfelder shows that Hawking radiation is not in a pure state.  In her paper, Hossenfelder shows that instead of being due to a pair of entangled particles, Hawking radiation is due to two pairs of entangled particles.  One entangled pair gets trapped by the black hole, while the other entangled pair escapes.  The process is similar to Hawking’s original proposal, but the Hawking particles are not in a pure state.

So there’s no paradox.  Black holes can radiate in a way that agrees with thermodynamics, and the region near the event horizon doesn’t have a firewall, just as general relativity requires.  So Hawking’s proposal is a solution to a problem that doesn’t exist.

What I’ve presented here is a very rough overview of the situation.  I’ve glossed over some of the more subtle aspects.  For a more detailed (and remarkably clear) overview check out Ethan Siegel’s post on his blog Starts With a Bang!  Also check out the post on Sabine Hossenfelder’s blog, Backreaction, where she talks about the issue herself.

How We Know Gravity is Not (Just) a Force

This artist’s impression shows the exotic double object that consists of a tiny, but very heavy neutron star that spins 25 times each second, orbited every two and a half hours by a white dwarf star. The neutron star is a pulsar named PSR J0348+0432 that is giving off radio waves that can be picked up on Earth by radio telescopes. Although this unusual pair is very interesting in its own right, it is also a unique laboratory for testing the limits of physical theories. This system is radiating gravitational radiation, ripples in spacetime. Although these waves cannot yet be detected directly by astronomers on Earth, they can be detected indirectly by measuring the change in the orbit of the system as it loses energy. As the pulsar is so small, the relative sizes of the two objects are not drawn to scale.

When  we think of gravity, we typically think of it as a force between masses.  When you step on a scale, for example, the number on the scale represents the pull of the Earth’s gravity on your mass, giving you weight.  It is easy to imagine the gravitational force of the Sun holding the planets in their orbits, or the gravitational pull of a black hole.  Forces are easy to understand as pushes and pulls.

But we now understand that gravity as a force is only part of a more complex phenomenon described by the theory of general relativity.  While general relativity is an elegant theory, it’s a radical departure from the idea of gravity as a force.  As Carl Sagan once said, “Extraordinary claims require extraordinary evidence,” and Einstein’s theory is a very extraordinary claim.  But it turns out there are several extraordinary experiments that confirm the curvature of space and time.

The key to general relativity lies in the fact that everything in a gravitational field falls at the same rate.  Stand on the Moon and drop a hammer and a feather, and they will hit the surface at the same time.  The same is true for any object regardless of its mass or physical makeup, and this is known as the equivalence principle.

Since everything falls in the same way regardless of its mass, it means that without some external point of reference, a free-floating observer far from gravitational sources and a free-falling observer in the gravitational field of a massive body each have the same experience. For example, astronauts in the space station look as if they are floating without gravity.  Actually, the gravitational pull of the Earth on the space station is nearly as strong as it is at the surface.  The difference is that the space station (and everything in it) is falling.  The space station is in orbit, which means it is literally falling around the Earth.

The International Space Station orbiting Earth. Credit: NASA

This equivalence between floating and falling is what Einstein used to develop his theory.  In general relativity, gravity is not a force between masses.  Instead gravity is an effect of the warping of space and time in the presence of mass.  Without a force acting upon it, an object will move in a straight line.  If you draw a line on a sheet of paper, and then twist or bend the paper, the line will no longer appear straight.  In the same way, the straight path of an object is bent when space and time are bent.  This explains why all objects fall at the same rate.  The Earth’s mass warps spacetime in a particular way, so the straight paths of all objects are bent in the same way near the Earth.

So what kind of experiment could possibly prove that gravity is warped spacetime?  One stems from the fact that light can be deflected by a nearby mass.  It is often argued that since light has no mass, it shouldn’t be deflected by the gravitational force of a body.  This isn’t quite correct. Since light has energy, and by special relativity mass and energy are equivalent, Newton’s gravitational theory predicts that light would be deflected slightly by a nearby mass.  The difference is that general relativity predicts it will be deflected twice as much.
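Here’s a small sketch of that factor of two, using the standard deflection formulas for a light ray grazing the Sun’s edge (2GM/c²b for the Newtonian estimate, 4GM/c²b for general relativity — textbook expressions, not numbers from the article):

```python
import math

G     = 6.674e-11   # gravitational constant
c     = 2.998e8     # speed of light
M_SUN = 1.989e30    # kg
R_SUN = 6.963e8     # m, solar radius (the closest a grazing ray gets)

RAD_TO_ARCSEC = 180 / math.pi * 3600

newton   = 2 * G * M_SUN / (c**2 * R_SUN) * RAD_TO_ARCSEC
einstein = 4 * G * M_SUN / (c**2 * R_SUN) * RAD_TO_ARCSEC

print(f"Newtonian deflection: {newton:.2f} arcseconds")    # ~0.87"
print(f"Einstein deflection:  {einstein:.2f} arcseconds")  # ~1.75"
# Eddington's eclipse measurements favored the larger, relativistic value.
```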

Description of Eddington’s experiment from the Illustrated London News (1919).

The effect was first observed by Arthur Eddington in 1919.  Eddington traveled to the island of Principe off the coast of West Africa to photograph a total eclipse. He had taken photos of the same region of the sky sometime earlier. By comparing the eclipse photos and the earlier photos of the same sky, Eddington was able to show that the apparent positions of stars shifted when the Sun was nearby.  The amount of deflection agreed with Einstein, and not Newton.  Since then we’ve seen a similar effect in which the light of distant quasars and galaxies is deflected by closer masses.  It is often referred to as gravitational lensing, and it has been used to measure the masses of galaxies, and even to see the effects of dark matter.

Another piece of evidence is known as the time-delay experiment.  The mass of the Sun warps space near it, so light passing near the Sun doesn’t travel in a perfectly straight line.  Instead it travels along a slightly curved path that is a bit longer.  This means light from a planet on the other side of the solar system from Earth reaches us a tiny bit later than we would otherwise expect.  The first measurement of this time delay was in the late 1960s by Irwin Shapiro.  Radio signals were bounced off Venus from Earth when the two planets were almost on opposite sides of the Sun. The measured delay of the signals’ round trip was about 200 microseconds, just as predicted by general relativity.  This effect is now known as the Shapiro time delay, and it means the average speed of light (as determined by the travel time) is slightly slower than the (always constant) instantaneous speed of light.
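As a rough sketch of the size of that effect, here is the leading-order formula for the round-trip Shapiro delay of a signal grazing the Sun; the planetary distances and the grazing assumption are mine, plugged in for illustration rather than taken from Shapiro’s actual analysis.

```python
import math

G     = 6.674e-11   # gravitational constant
c     = 2.998e8     # speed of light
M_SUN = 1.989e30    # kg
R_SUN = 6.963e8     # m

r_earth = 1.496e11  # Earth-Sun distance, m
r_venus = 1.082e11  # Venus-Sun distance, m
b       = R_SUN     # assume the radar signal just grazes the Sun

# Leading-order round-trip Shapiro delay for a signal passing near the Sun
delay = (4 * G * M_SUN / c**3) * math.log(4 * r_earth * r_venus / b**2)

print(f"Round-trip delay: {delay * 1e6:.0f} microseconds")
# ~230 microseconds for a grazing path -- the same ballpark as the
# ~200 microseconds measured in the Earth-Venus radar experiments.
```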

A third effect is gravitational waves.  If stars warp space around them, then the motion of stars in a binary system should create ripples in spacetime, similar to the way swirling your finger in water can create ripples on the water’s surface.  As the gravity waves radiate away from the stars, they take away some of the energy from the binary system. This means that the two stars gradually move closer together, an effect known as inspiralling. As the two stars inspiral, their orbital period gets shorter because their orbits are getting smaller.

Decay of pulsar period compared to prediction (dashed curve). Data from Hulse and Taylor, plotted by the author.

For regular binary stars this effect is so small that we can’t observe it. However, in 1974 two astronomers (Hulse and Taylor) discovered an interesting pulsar. Pulsars are rapidly rotating neutron stars that happen to radiate radio pulses in our direction. The pulse rate of a pulsar is typically very, very regular. Hulse and Taylor noticed that this particular pulsar’s rate would speed up slightly, then slow down slightly, at a regular rate. They showed that this variation was due to the motion of the pulsar as it orbited a companion star. They were able to determine the orbital motion of the pulsar very precisely, calculating its orbital period to within a fraction of a second. As they observed their pulsar over the years, they noticed its orbital period was gradually getting shorter. The pulsar is inspiralling due to the radiation of gravity waves, just as predicted.

Illustration of Gravity Probe B. Credit: Gravity Probe B Team, Stanford, NASA

Finally there is an effect known as frame dragging.  We have seen this effect near Earth itself.  Because the Earth is rotating, it not only curves spacetime by its mass, it twists spacetime around it due to its rotation.  This twisting of spacetime is known as frame dragging.  The effect is not very big near the Earth, but it can be measured through the Lense-Thirring effect.  Basically you put a spherical gyroscope in orbit, and see if its axis of rotation changes.  If there is no frame dragging, then the orientation of the gyroscope shouldn’t change.  If there is frame dragging, then the spiral twist of space and time will cause the gyroscope to precess, and its orientation will slowly change over time.

Gravity Probe B results. Credit: Gravity Probe B team, NASA.

We’ve actually done this experiment with a satellite known as Gravity Probe B, and you can see the results in the figure here.  As you can see, they agree very well with the predictions of general relativity.

Each of these experiments shows that gravity is not simply a force between masses.  Gravity is instead an effect of space and time.  Gravity is built into the very shape of the universe.

Think on that the next time you step onto a scale.

Black Holes No More? Not Quite.

Artist concept of matter swirling around a black hole. (NASA/Dana Berry/SkyWorks Digital)

Nature News has announced that there are no black holes.  This claim is made by none other than Stephen Hawking, so does this mean black holes are no more?  It depends on whether Hawking’s new idea is right, and on what you mean by a black hole.  The claim is based on a new paper by Hawking that argues the event horizon of a black hole doesn’t exist.

The event horizon of a black hole is basically the point of no return when approaching a black hole.  In Einstein’s theory of general relativity, the event horizon is where space and time are so warped by gravity that you can never escape.  Cross the event horizon and you can only move inward, never outward.  The problem with a one-way event horizon is that it leads to what is known as the information paradox.

Professor Stephen Hawking during a zero-gravity flight. Image credit: Zero G.

The information paradox has its origin in thermodynamics, specifically the second law of thermodynamics.  In its simplest form it can be summarized as “heat flows from hot objects to cold objects.”  But the law is more useful when it is expressed in terms of entropy, in which case it is stated as “the entropy of a system can never decrease.”  Many people interpret entropy as the level of disorder in a system, or the unusable part of a system.  That would mean things must always become less useful over time.

But entropy is really about the level of information you need to describe a system.  An ordered system (say, marbles evenly spaced in a grid) is easy to describe because the objects have simple relations to each other.  On the other hand, a disordered system (marbles randomly scattered) takes more information to describe, because there isn’t a simple pattern to them.  So when the second law says that entropy can never decrease, it is saying that the physical information of a system cannot decrease.  In other words, information cannot be destroyed.
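A toy illustration of that “information needed to describe a system” idea (my own sketch, not from the article): count how many arrangements are possible and take the logarithm — the more arrangements there are, the more bits you need to pin down which one you actually have.

```python
import math

BOARD = 16 * 16   # cells on a 16 x 16 board
MARBLES = 16      # identical marbles placed on it

# Ordered case: the marbles must sit in one prescribed grid pattern.
ordered_arrangements = 1
ordered_bits = math.log2(ordered_arrangements)

# Disordered case: the marbles can occupy any 16 of the 256 cells.
scattered_arrangements = math.comb(BOARD, MARBLES)
scattered_bits = math.log2(scattered_arrangements)

print(f"Ordered:   {ordered_bits:.0f} bits to specify the arrangement")
print(f"Scattered: {scattered_bits:.0f} bits to specify the arrangement")
# 0 bits vs ~83 bits: the scattered system needs far more information to
# describe, which is the sense of "entropy" the second law is protecting.
```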

The problem with event horizons is that you could toss an object (with a great deal of entropy) into a black hole, and the entropy would simply go away.  In other words, the entropy of the universe would get smaller, which would violate the second law of thermodynamics.  Of course this doesn’t take into account quantum effects, specifically what is known as Hawking radiation, which Stephen Hawking first proposed in 1974.

The original idea of Hawking radiation stems from the uncertainty principle in quantum theory.  In quantum theory there are limits to what can be known about an object.  For example, you cannot know an object’s exact energy.  Because of this uncertainty, the energy of a system can fluctuate spontaneously, so long as its average remains constant.  What Hawking demonstrated is that near the event horizon of a black hole pairs of particles can appear, where one particle becomes trapped within the event horizon (reducing the black hole’s mass slightly) while the other can escape as radiation (carrying away a bit of the black hole’s energy).

Hawking radiation near an event horizon. Credit: NAU.

Because these quantum particles appear in pairs, they are “entangled” (connected in a quantum way).  This doesn’t matter much, unless you want Hawking radiation to radiate the information contained within the black hole.  In Hawking’s original formulation, the particles appeared randomly, so the radiation emanating from the black hole was purely random.  Thus Hawking radiation would not allow you to recover any trapped information.

To allow Hawking radiation to carry information out of the black hole, the entangled connection between particle pairs must be broken at the event horizon, so that the escaping particle can instead be entangled with the information-carrying matter within the black hole.  This breaking of the original entanglement would make the escaping particles appear as an intense “firewall” at the surface of the event horizon.  This would mean that anything falling toward the black hole wouldn’t make it into the black hole.  Instead it would be vaporized by Hawking radiation when it reached the event horizon.  It would seem then that either the physical information of an object is lost when it falls into a black hole (information paradox) or objects are vaporized before entering a black hole (firewall paradox).

In this new paper, Hawking proposes a different approach.  He argues that rather than gravity warping space and time into an event horizon, the quantum fluctuations of Hawking radiation create a layer of turbulence in that region.  So instead of a sharp event horizon, a black hole would have an apparent horizon that looks like an event horizon but allows information to leak out.  Hawking argues that the turbulence would scramble the information leaving a black hole so thoroughly that it would be effectively irrecoverable.

If Stephen Hawking is right, then it could solve the information/firewall paradox that has plagued theoretical physics.  Black holes would still exist in the astrophysics sense (the one in the center of our galaxy isn’t going anywhere) but they would lack event horizons.  It should be stressed that Hawking’s paper hasn’t been peer reviewed, and it is a bit lacking in details.  It is more a presentation of an idea than a detailed solution to the paradox.  Further research will be needed to determine if this idea is the solution we’ve been looking for.

Why Is the Solar System Flat?

It’s no mystery that the planets, moons, asteroids, etc. in the Solar System are arranged in a more-or-less flat, plate-like alignment in their orbits around the Sun.* But why is that? In a three-dimensional Universe, why should anything have a particular alignment at all? In yet another entertaining video from the folks at MinutePhysics, we see the reason behind this seemingly coincidental feature of our Solar System — and, for that matter, pretty much all planetary systems that have so far been discovered (not to mention planetary ring systems, accretion disks, many galaxies… well, you get the idea.) Check it out above.

Video by MinutePhysics. Created by Henry Reich

Why Einstein Will Never Be Wrong

Albert Einstein during a lecture in Vienna in 1921. Credit: National Library of Austria/F Schmutzer/Public Domain

One of the benefits of being an astrophysicist is your weekly email from someone who claims to have “proven Einstein wrong”. These either contain no mathematical equations and use phrases such as “it is obvious that…”, or they are page after page of complex equations with dozens of scientific terms used in non-traditional ways. They all get deleted pretty quickly, not because astrophysicists are too indoctrinated in established theories, but because none of them acknowledge how theories get replaced.

For example, in the late 1700s there was a theory of heat known as caloric. The basic idea was that caloric was a fluid that existed within materials. This fluid was self-repellent, meaning it would try to spread out as evenly as possible. We couldn’t observe this fluid directly, but the more caloric a material held, the greater its temperature.

Ice-calorimeter from Antoine Lavoisier’s 1789 Elements of Chemistry. (Public Domain)

From this theory you get several predictions that actually work. Since you can’t create or destroy caloric, heat (energy) is conserved. If you put a cold object next to a hot object, the caloric in the hot object will spread out to the cold object until they reach the same temperature.  When air expands, the caloric is spread out more thinly, thus the temperature drops. When air is compressed there is more caloric per volume, and the temperature rises.

We now know there is no “heat fluid” known as caloric. Heat is a property of the motion (kinetic energy) of the atoms or molecules in a material. So in physics we’ve dropped the caloric model in favor of kinetic theory. You could say we now know that the caloric model is completely wrong.

Except it isn’t. At least no more wrong than it ever was.

The basic assumption of a “heat fluid” doesn’t match reality, but the model makes predictions that are correct. In fact the caloric model works as well today as it did in the late 1700s. We don’t use it anymore because we have newer models that work better. Kinetic theory makes all the predictions caloric does and more. Kinetic theory even explains how the thermal energy of a material can be approximated as a fluid.

This is a key aspect of scientific theories. If you want to replace a robust scientific theory with a new one, the new theory must be able to do more than the old one. When you replace the old theory you now understand the limits of that theory and how to move beyond it.

In some cases even when an old theory is supplanted we continue to use it. Such an example can be seen in Newton’s law of gravity. When Newton proposed his theory of universal gravity in the 1600s, he described gravity as a force of attraction between all masses. This allowed for the correct prediction of the motion of the planets, the discovery of Neptune, the basic relation between a star’s mass and its temperature, and on and on. Newtonian gravity was and is a robust scientific theory.

Then in the early 1900s Einstein proposed a different model known as general relativity. The basic premise of this theory is that gravity is due to the curvature of space and time by masses.  Even though Einstein’s gravity model is radically different from Newton’s, the mathematics of the theory shows that Newton’s equations are approximate solutions to Einstein’s equations.  Everything Newton’s gravity predicts, Einstein’s does as well. But Einstein also allows us to correctly model black holes, the big bang, the precession of Mercury’s orbit, time dilation, and more, all of which have been experimentally validated.

So Einstein trumps Newton. But Einstein’s theory is much more difficult to work with than Newton’s, so often we just use Newton’s equations to calculate things. For example, the motion of satellites, or exoplanets. If we don’t need the precision of Einstein’s theory, we simply use Newton to get an answer that is “good enough.” We may have proven Newton’s theory “wrong”, but the theory is still as useful and accurate as it ever was.
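To see just how small the relativistic correction usually is, here’s a quick sketch of the classic example — the extra precession of Mercury’s perihelion that Newton misses and Einstein predicts. The formula is the standard general-relativistic result, and the orbital parameters are published values I’ve plugged in for illustration.

```python
import math

G     = 6.674e-11    # gravitational constant
c     = 2.998e8      # speed of light
M_SUN = 1.989e30     # kg

a = 5.79e10          # Mercury's semi-major axis, m
e = 0.2056           # Mercury's orbital eccentricity
period_days = 87.97  # Mercury's orbital period

# Relativistic perihelion advance per orbit (radians)
dphi = 6 * math.pi * G * M_SUN / (a * (1 - e**2) * c**2)

orbits_per_century = 36525 / period_days
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600

print(f"Extra precession: {arcsec_per_century:.0f} arcseconds per century")
# ~43 arcseconds per century -- a famously tiny correction, yet exactly the
# discrepancy that Newtonian gravity could never account for.
```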

Unfortunately, many budding Einsteins don’t understand this.

Binary waves from black holes. Image Credit: K. Thorne (Caltech), T. Carnahan (NASA GSFC)

To begin with, Einstein’s gravity will never be proven wrong by a theory. It will be proven wrong by experimental evidence showing that the predictions of general relativity don’t work. Einstein’s theory didn’t supplant Newton’s until we had experimental evidence that agreed with Einstein and didn’t agree with Newton. So unless you have experimental evidence that clearly contradicts general relativity, claims of “disproving Einstein” will fall on deaf ears.

The other way to trump Einstein would be to develop a theory that clearly shows how Einstein’s theory is an approximation of your new theory, or how the experimental tests general relativity has passed are also passed by your theory.  Ideally, your new theory will also make new predictions that can be tested in a reasonable way.  If you can do that, and can present your ideas clearly, you will be listened to.  String theory and entropic gravity are examples of models that try to do just that.

But even if someone succeeds in creating a theory better than Einstein’s (and someone almost certainly will), Einstein’s theory will still be as valid as it ever was.  Einstein won’t have been proven wrong; we’ll simply understand the limits of his theory.