NOvA Experiment Nabs Its First Neutrinos

The NuMI (Neutrinos from the Main Injector) horn at Fermilab, which focuses the particles that decay in flight into neutrinos. (Image: Caltech)

Neutrinos are some of the most abundant, curious, and elusive critters in particle physics. Incredibly lightweight — nigh massless, according to the Standard Model — as well as chargeless, they zip around the Universe at nearly the speed of light and barely interact with any other particles. Some of them have been around since the Big Bang and, just in the time you’ve read this, trillions of them have passed through your body (and more are on the way). But despite their ubiquity, neutrinos are notoriously difficult to study precisely because they ignore pretty much everything made out of anything else. So it’s not surprising that weighing a neutrino isn’t as simple as politely asking one to step on a scale.

Thankfully particle physicists are a tenacious lot, including the ones at the U.S. Department of Energy’s Fermilab, and they aren’t giving up on their latest neutrino safari: the NuMI Off-Axis Electron Neutrino Appearance experiment, or NOvA. (Scientists represent neutrinos with the Greek letter nu, ν.) It’s a very small-game hunt to catch neutrinos on the fly, and it uses some very big equipment to do the job. And it has already captured its first neutrinos — even before its setup is fully complete.

The neutrinos are created by smashing protons against graphite targets at Fermilab’s facility just outside Chicago, Illinois, then directed in a beam 500 miles northwest to the NOvA far detector in Ash River, Minnesota, near the Canadian border. The very first beams were fired in Sept. 2013, while the Ash River facility was still under construction.

One of the first detections by NOvA of Fermilab-made neutrinos (Image courtesy of NOvA collaboration)

“That the first neutrinos have been detected even before the NOvA far detector installation is complete is a real tribute to everyone involved,” said University of Minnesota physicist Marvin Marshak, Ash River Laboratory director. “This early result suggests that the NOvA collaboration will make important contributions to our knowledge of these particles in the not so distant future.”

The 500-mile (800 km) subterranean path of the NOvA neutrino beam (Fermilab)

The beams from Fermilab are fired in two-second intervals, each sending billions of neutrinos directly toward the detectors. The near detector at Fermilab confirms the initial “flavor” of the neutrinos in the beam, and the much larger far detector then determines whether the neutrinos have changed flavor during their three-millisecond underground interstate journey.

Again, because neutrinos don’t readily interact with ordinary particles, the beams can easily travel straight through the ground between the facilities — despite the curvature of the Earth. In fact, the beam, which starts out 150 feet (45 meters) below ground near Chicago, eventually passes more than 6 miles (10 km) deep during its trip.
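
To put rough numbers on that, here’s a back-of-the-envelope check (a sketch in Python; the distance and Earth radius are rounded, and the beam’s initial depth and downward launch angle are ignored):

```python
import math

c = 299_792_458   # speed of light, m/s
L = 810e3         # Fermilab-to-Ash-River distance, m (~500 miles)
R = 6.371e6       # mean Earth radius, m

# Light-speed travel time over the baseline
print(L / c * 1e3)  # ~2.7 ms, matching the "three-millisecond" journey mentioned above

# Maximum depth of a straight chord between two points on the surface
print((R - math.sqrt(R**2 - (L / 2)**2)) / 1e3)  # ~13 km: the same order as the quoted depth
```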

According to a press release from Fermilab, neutrinos “come in three types, called flavors (electron, muon, or tau), and change between them as they travel. The two detectors of the NOvA experiment are placed so far apart to give the neutrinos the time to oscillate from one flavor to another while traveling at nearly the speed of light. Even though only a fraction of the experiment’s larger detector, called the far detector, is fully built, filled with scintillator and wired with electronics at this point, the experiment has already used it to record signals from its first neutrinos.”

The 50-foot (15 m) tall detector blocks are filled with a liquid scintillator that’s made of 95% mineral oil and 5% of a liquid hydrocarbon called pseudocumene, which is toxic but “imperative to the neutrino-detecting process.” The mixture magnifies any light that hits it, allowing the neutrino strikes to be more easily detected and measured.

“NOvA represents a new generation of neutrino experiments,” said Fermilab Director Nigel Lockyer. “We are proud to reach this important milestone on our way to learning more about these fundamental particles.”

One of NOvA’s 28 far detector blocks (Fermilab)

After completion this summer, NOvA’s near and far detectors will weigh 300 and 14,000 tons, respectively.

The goal of the NOvA experiment is to capture and measure the masses of the different neutrino flavors and also to determine whether neutrinos are their own antiparticles (they could be, since they lack electric charge). By comparing the oscillations (i.e., flavor changes) of muon neutrino beams vs. muon antineutrino beams fired from Fermilab, scientists hope to determine the neutrino mass hierarchy — and ultimately discover why the Universe currently contains much more matter than antimatter.
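
For a sense of why the far detector sits 500 miles away, here is the standard two-flavor oscillation formula, sketched in Python. This is a simplification of the full three-flavor analysis NOvA actually performs, and the parameter values are representative textbook numbers, not NOvA’s measured ones:

```python
import math

def oscillation_probability(dm2_ev2, L_km, E_GeV, sin2_2theta=1.0):
    """Two-flavor approximation: P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# Representative atmospheric-sector values: dm^2 ~ 2.4e-3 eV^2, an ~810 km
# baseline, and a ~2 GeV beam. For muon-neutrino disappearance the mixing
# is nearly maximal (sin^2(2*theta) ~ 1).
print(oscillation_probability(2.4e-3, 810, 2.0))  # ~0.89: the baseline sits near an oscillation maximum
```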

Read more: Neutrino Detection Could Help Paint an Entirely New Picture of the Universe

Once the experiment is fully operational scientists expect to catch a precious few neutrinos every day — about 5,000 total over the course of its six-year run. Until then, they at least now have their first few on the books.

“Seeing neutrinos in the first modules of the detector in Minnesota is a major milestone. Now we can start doing physics.”
– Rick Tesarek, Fermilab physicist

Learn more about the development and construction of the NOvA experiment below:


(Video credit: Fermilab)

Find out more about the NOvA research goals here.

Source: Fermilab press release

The NOvA collaboration is made up of 208 scientists from 38 institutions in the United States, Brazil, the Czech Republic, Greece, India, Russia and the United Kingdom. The experiment receives funding from the U.S. Department of Energy, the National Science Foundation and other funding agencies.

Planck “Star” to Arise From Black Holes?

Artistic view of a radiating black hole. Credit: NASA

A new paper has been posted on the arXiv (a repository of research preprints) introducing the idea of a Planck star arising from a black hole. These hypothetical objects wouldn’t be stars in the traditional sense, but rather the light emitted when a black hole dies at the hands of Hawking radiation. The paper hasn’t been peer reviewed, but it presents an interesting idea and a possible observational test.

When a large star reaches the end of its life, it explodes as a supernova, which can cause its core to collapse into a black hole.  In the traditional model of a black hole, the material collapses down into an infinitesimal volume known as a singularity.  Of course this doesn’t take into account quantum theory.

Although we don’t have a complete theory of quantum gravity, we do know a few things.  One is that black holes shouldn’t last forever.  Because of quantum fluctuations near the event horizon of a black hole, a black hole will emit Hawking radiation.  As a result, a black hole will gradually lose mass as it radiates.  The amount of Hawking radiation it emits is inversely proportional to its size, so as the black hole gets smaller it will emit more and more Hawking radiation until it finally radiates completely away.
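
The scaling described here follows from the Hawking temperature, which is inversely proportional to the black hole’s mass. A quick numeric sketch with standard physical constants:

```python
import math

hbar  = 1.0546e-34   # reduced Planck constant, J*s
c     = 2.9979e8     # speed of light, m/s
G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k_B   = 1.3807e-23   # Boltzmann constant, J/K
M_sun = 1.989e30     # solar mass, kg

def hawking_temperature(M):
    """T = hbar * c^3 / (8 * pi * G * M * k_B): smaller black holes are hotter."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(hawking_temperature(M_sun))  # ~6e-8 K: a solar-mass black hole is far colder than the CMB
```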

Because black holes don’t last forever, Stephen Hawking and others have proposed that black holes don’t have an event horizon, but rather an apparent horizon. This would mean the material within a black hole would not collapse into a singularity, which is where this new paper comes in.

Diagram showing how matter approaches Planck density. Credit: Carlo Rovelli and Francesca Vidotto

The authors propose that rather than collapsing into a singularity, the matter within a black hole will collapse until it is about a trillionth of a meter in size. At that point its density would be on the order of the Planck density. When the black hole ends its life, this “Planck star” would be revealed. Because this “star” would be at the Planck density, it would radiate at a specific wavelength of gamma rays. So if they exist, a gamma-ray telescope should be able to observe them.

Just to be clear, this is still pretty speculative.  So far there isn’t any observational evidence that such a Planck star exists.  It is, however, an interesting solution to the paradoxical side of black holes.

 

How A Laser Appears To Move Faster Than Light (And Why It Really Isn’t)

Gieren et al. used the 8.2-m Very Large Telescope (Yepun) to image M33, and deduce the distance to that galaxy (image credit: ESO).

We at Universe Today often hear theories purporting that Einstein is wrong, and perhaps one of the most commonly cited sticking points is the speed limit for light used in his relativity theories. In a vacuum, light travels at close to 300,000 km/s (roughly 186,000 miles per second). Using a bit of geometry, however, isn’t there a way to make it go faster? The video below shows why you’d think that would work, and why it actually doesn’t.

“There is a classic method where you shine a laser at the moon. If you can flick that beam across the moon’s surface in less than a hundredth of a second, which is not hard to do, then that laser spot will actually move across the surface of the moon faster than the speed of light,” says the host on this Veritasium video.

“In truth, nothing here is really travelling faster than the speed of light. The individual particles coming out of my laser, the photons, are still travelling to the moon at the speed of light. It’s just that they’re landing side by side in such quick succession that they form a spot that moves faster than the speed of light, but really, it’s just an illusion.”
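
The arithmetic behind that claim is easy to check with rounded figures (a sketch, not a precise ephemeris):

```python
moon_distance = 3.84e8  # m, average Earth-Moon distance
moon_diameter = 3.47e6  # m
sweep_time    = 0.01    # s, "less than a hundredth of a second"
c             = 3.0e8   # m/s

# Angular rate your wrist needs to supply (radians per second)
angular_rate = (moon_diameter / moon_distance) / sweep_time
spot_speed = moon_distance * angular_rate  # equals moon_diameter / sweep_time

print(angular_rate)                # ~0.9 rad/s: an easy flick of the wrist
print(spot_speed, spot_speed > c)  # ~3.5e8 m/s: the spot outruns light, but no photon does
```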

There are many more ways that light can appear to move faster than the cosmic speed limit, and you can check out more of those in the video.

Why Hawking is Wrong About Black Holes

Artist rendering of a supermassive black hole. Credit: NASA / JPL-Caltech.

A recent paper by Stephen Hawking has created quite a stir, even leading Nature News to declare there are no black holes. As I wrote in an earlier post, that isn’t quite what Hawking claimed.  But it is now clear that Hawking’s claim about black holes is wrong because the paradox he tries to address isn’t a paradox after all.

It all comes down to what is known as the firewall paradox for black holes.  The central feature of a black hole is its event horizon.  The event horizon of a black hole is basically the point of no return when approaching a black hole.  In Einstein’s theory of general relativity, the event horizon is where space and time are so warped by gravity that you can never escape.  Cross the event horizon and you are forever trapped.

This one-way nature of an event horizon has long been a challenge to understanding gravitational physics. For example, a black hole event horizon would seem to violate the laws of thermodynamics. One of the principles of thermodynamics is that nothing can have a temperature of absolute zero. Even very cold things radiate a little heat, but if a black hole traps light then it doesn’t give off any heat. So a black hole would have a temperature of zero, which shouldn’t be possible.

Then in 1974 Stephen Hawking demonstrated that black holes do radiate light due to quantum mechanics. In quantum theory there are limits to what can be known about an object. For example, you cannot know an object’s exact energy. Because of this uncertainty, the energy of a system can fluctuate spontaneously, so long as its average remains constant. What Hawking demonstrated is that near the event horizon of a black hole pairs of particles can appear, where one particle becomes trapped within the event horizon (reducing the black hole’s mass slightly) while the other can escape as radiation (carrying away a bit of the black hole’s energy).

While Hawking radiation solved one problem with black holes, it created another problem known as the firewall paradox.  When quantum particles appear in pairs, they are entangled, meaning that they are connected in a quantum way.  If one particle is captured by the black hole, and the other escapes, then the entangled nature of the pair is broken.  In quantum mechanics, we would say that the particle pair appears in a pure state, and the event horizon would seem to break that state.

Artist visualization of entangled particles. Credit: NIST.

Last year it was shown that if Hawking radiation is in a pure state, then either it cannot radiate in the way required by thermodynamics, or it would create a firewall of high energy particles near the surface of the event horizon. This is often called the firewall paradox because according to general relativity, if you happen to be near the event horizon of a black hole you shouldn’t notice anything unusual. The fundamental idea of general relativity (the principle of equivalence) requires that if you are freely falling near the event horizon there shouldn’t be a raging firewall of high energy particles. In his paper, Hawking offered a solution to this paradox by proposing that black holes don’t have event horizons. Instead they have apparent horizons that don’t require a firewall to obey thermodynamics. Hence the declaration of “no more black holes” in the popular press.

But the firewall paradox only arises if Hawking radiation is in a pure state, and a paper last month by Sabine Hossenfelder shows that Hawking radiation is not in a pure state.  In her paper, Hossenfelder shows that instead of being due to a pair of entangled particles, Hawking radiation is due to two pairs of entangled particles.  One entangled pair gets trapped by the black hole, while the other entangled pair escapes.  The process is similar to Hawking’s original proposal, but the Hawking particles are not in a pure state.

So there’s no paradox.  Black holes can radiate in a way that agrees with thermodynamics, and the region near the event horizon doesn’t have a firewall, just as general relativity requires.  So Hawking’s proposal is a solution to a problem that doesn’t exist.

What I’ve presented here is a very rough overview of the situation. I’ve glossed over some of the more subtle aspects. For a more detailed (and remarkably clear) overview check out Ethan Siegel’s post on his blog Starts With a Bang! Also check out the post on Sabine Hossenfelder’s blog, Backreaction, where she talks about the issue herself.

How We Know Gravity is Not (Just) a Force

This artist’s impression shows the exotic double object that consists of a tiny, but very heavy neutron star that spins 25 times each second, orbited every two and a half hours by a white dwarf star. The neutron star is a pulsar named PSR J0348+0432 that is giving off radio waves that can be picked up on Earth by radio telescopes. Although this unusual pair is very interesting in its own right, it is also a unique laboratory for testing the limits of physical theories. This system is radiating gravitational radiation, ripples in spacetime. Although these waves cannot yet be detected directly by astronomers on Earth, they can be detected indirectly by measuring the change in the orbit of the system as it loses energy. As the pulsar is so small, the relative sizes of the two objects are not drawn to scale.

When we think of gravity, we typically think of it as a force between masses. When you step on a scale, for example, the number on the scale represents the pull of the Earth’s gravity on your mass, giving you weight. It is easy to imagine the gravitational force of the Sun holding the planets in their orbits, or the gravitational pull of a black hole. Forces are easy to understand as pushes and pulls.

But we now understand that gravity as a force is only part of a more complex phenomenon described by the theory of general relativity. While general relativity is an elegant theory, it’s a radical departure from the idea of gravity as a force. As Carl Sagan once said, “Extraordinary claims require extraordinary evidence,” and Einstein’s theory is a very extraordinary claim. But it turns out there are several extraordinary experiments that confirm the curvature of space and time.

The key to general relativity lies in the fact that everything in a gravitational field falls at the same rate.  Stand on the Moon and drop a hammer and a feather, and they will hit the surface at the same time.  The same is true for any object regardless of its mass or physical makeup, and this is known as the equivalence principle.

Since everything falls in the same way regardless of its mass, it means that without some external point of reference, a free-floating observer far from gravitational sources and a free-falling observer in the gravitational field of a massive body each have the same experience. For example, astronauts in the space station look as if they are floating without gravity.  Actually, the gravitational pull of the Earth on the space station is nearly as strong as it is at the surface.  The difference is that the space station (and everything in it) is falling.  The space station is in orbit, which means it is literally falling around the Earth.

The International Space Station orbiting Earth. Credit: NASA

This equivalence between floating and falling is what Einstein used to develop his theory. In general relativity, gravity is not a force between masses. Instead gravity is an effect of the warping of space and time in the presence of mass. Without a force acting upon it, an object will move in a straight line. If you draw a line on a sheet of paper, and then twist or bend the paper, the line will no longer appear straight. In the same way, the straight path of an object is bent when space and time are bent. This explains why all objects fall at the same rate. Gravity warps spacetime in a particular way, so the straight paths of all objects are bent in the same way near the Earth.

So what kind of experiment could possibly prove that gravity is warped spacetime?  One stems from the fact that light can be deflected by a nearby mass.  It is often argued that since light has no mass, it shouldn’t be deflected by the gravitational force of a body.  This isn’t quite correct. Since light has energy, and by special relativity mass and energy are equivalent, Newton’s gravitational theory predicts that light would be deflected slightly by a nearby mass.  The difference is that general relativity predicts it will be deflected twice as much.
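
Plugging standard solar values into the two predictions reproduces the factor of two, and the famous 1.75-arcsecond general-relativity figure (a numeric sketch, not a derivation):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.9979e8       # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.963e8    # solar radius, m (light just grazing the solar limb)

theta_gr = 4 * G * M_sun / (c**2 * R_sun)  # general relativity
theta_newton = theta_gr / 2                # Newtonian mass-energy estimate

to_arcsec = 180 / math.pi * 3600
print(theta_gr * to_arcsec)      # ~1.75 arcseconds
print(theta_newton * to_arcsec)  # ~0.87 arcseconds
```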

Description of Eddington’s experiment from the Illustrated London News (1919).

The effect was first observed by Arthur Eddington in 1919. Eddington traveled to the island of Principe off the coast of West Africa to photograph a total eclipse. He had taken photos of the same region of the sky sometime earlier. By comparing the eclipse photos and the earlier photos of the same sky, Eddington was able to show the apparent positions of stars shifted when the Sun was near. The amount of deflection agreed with Einstein’s prediction, not Newton’s. Since then we’ve seen a similar effect where the light of distant quasars and galaxies is deflected by closer masses. It is often referred to as gravitational lensing, and it has been used to measure the masses of galaxies, and even see the effects of dark matter.

Another piece of evidence is known as the time-delay experiment. The mass of the Sun warps space near it, therefore light passing near the Sun doesn’t travel in a perfectly straight line. Instead it travels along a slightly curved path that is a bit longer. This means light from a planet on the other side of the solar system from Earth reaches us a tiny bit later than we would otherwise expect. The first measurement of this time delay was in the late 1960s by Irwin Shapiro. Radio signals were bounced off Venus from Earth when the two planets were almost on opposite sides of the Sun. The measured delay of the signals’ round trip was about 200 microseconds, just as predicted by general relativity. This effect is now known as the Shapiro time delay, and it means the average speed of light (as determined by the travel time) is slightly slower than the (always constant) instantaneous speed of light.
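
A rough scale estimate shows why the answer lands in the hundreds of microseconds. The full Shapiro formula multiplies 4GM/c^3 by a logarithmic factor set by the Earth-Sun-Venus geometry; near superior conjunction that factor is roughly ten, so treat this as an order-of-magnitude sketch:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.9979e8       # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

base = 4 * G * M_sun / c**3  # the Sun's characteristic gravitational time scale
print(base * 1e6)            # ~20 microseconds
print(base * 10 * 1e6)       # ~200 microseconds: the order of the measured delay
```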

A third effect is gravitational waves.  If stars warp space around them, then the motion of stars in a binary system should create ripples in spacetime, similar to the way swirling your finger in water can create ripples on the water’s surface.  As the gravity waves radiate away from the stars, they take away some of the energy from the binary system. This means that the two stars gradually move closer together, an effect known as inspiralling. As the two stars inspiral, their orbital period gets shorter because their orbits are getting smaller.

Decay of pulsar period compared to prediction (dashed curve). Data from Hulse and Taylor, plotted by the author.

For regular binary stars this effect is so small that we can’t observe it. However in 1974 two astronomers (Hulse and Taylor) discovered an interesting pulsar. Pulsars are rapidly rotating neutron stars that happen to radiate radio pulses in our direction. The pulse rate of pulsars is typically very, very regular. Hulse and Taylor noticed that this particular pulsar’s rate would speed up slightly then slow down slightly at a regular rate. They showed that this variation was due to the motion of the pulsar as it orbited a star. They were able to determine the orbital motion of the pulsar very precisely, calculating its orbital period to within a fraction of a second. As they observed their pulsar over the years, they noticed its orbital period was gradually getting shorter. The pulsar is inspiralling due to the radiation of gravity waves, just as predicted.

Illustration of Gravity Probe B. Credit: Gravity Probe B Team, Stanford, NASA

Finally there is an effect known as frame dragging.  We have seen this effect near Earth itself.  Because the Earth is rotating, it not only curves spacetime by its mass, it twists spacetime around it due to its rotation.  This twisting of spacetime is known as frame dragging.  The effect is not very big near the Earth, but it can be measured through the Lense-Thirring effect.  Basically you put a spherical gyroscope in orbit, and see if its axis of rotation changes.  If there is no frame dragging, then the orientation of the gyroscope shouldn’t change.  If there is frame dragging, then the spiral twist of space and time will cause the gyroscope to precess, and its orientation will slowly change over time.

Gravity Probe B results. Credit: Gravity Probe B team, NASA.

We’ve actually done this experiment with a satellite known as Gravity Probe B, and you can see the results in the figure here. The measurements agree very well with the predictions of general relativity.

Each of these experiments shows that gravity is not simply a force between masses. Gravity is instead an effect of space and time. Gravity is built into the very shape of the universe.

Think on that the next time you step onto a scale.

Black Holes No More? Not Quite.

Artist concept of matter swirling around a black hole. (NASA/Dana Berry/SkyWorks Digital)

Nature News has announced that there are no black holes. This claim is made by none other than Stephen Hawking, so does this mean black holes are no more? It depends on whether Hawking’s new idea is right, and on what you mean by a black hole. The claim is based on a new paper by Hawking that argues the event horizon of a black hole doesn’t exist.

The event horizon of a black hole is basically the point of no return when approaching a black hole.  In Einstein’s theory of general relativity, the event horizon is where space and time are so warped by gravity that you can never escape.  Cross the event horizon and you can only move inward, never outward.  The problem with a one-way event horizon is that it leads to what is known as the information paradox.

Professor Stephen Hawking during a zero-gravity flight. Image credit: Zero G.

The information paradox has its origin in thermodynamics, specifically the second law of thermodynamics. In its simplest form it can be summarized as “heat flows from hot objects to cold objects”. But the law is more useful when it is expressed in terms of entropy. In this way it is stated as “the entropy of a system can never decrease.” Many people interpret entropy as the level of disorder in a system, or the unusable part of a system. That would mean things must always become less useful over time. But entropy is really about the level of information you need to describe a system. An ordered system (say, marbles evenly spaced in a grid) is easy to describe because the objects have simple relations to each other. On the other hand, a disordered system (marbles randomly scattered) takes more information to describe, because there isn’t a simple pattern to them. So when the second law says that entropy can never decrease, it is saying that the physical information of a system cannot decrease. In other words, information cannot be destroyed.
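
A loose way to see “entropy as description length” is with data compression: the same collection of bytes needs far fewer bits to describe when it’s ordered than when it’s scrambled. This is only an analogy to thermodynamic entropy, but it captures the marbles example above (Python):

```python
import random
import zlib

ordered = bytes(range(256)) * 40  # a simple repeating pattern: "marbles in a grid"
scrambled = bytearray(ordered)
random.shuffle(scrambled)         # same bytes, no pattern: "marbles scattered"

print(len(zlib.compress(ordered)))           # small: the pattern is easy to describe
print(len(zlib.compress(bytes(scrambled))))  # large: randomness resists compression
```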

The problem with event horizons is that you could toss an object (with a great deal of entropy) into a black hole, and the entropy would simply go away.  In other words, the entropy of the universe would get smaller, which would violate the second law of thermodynamics.  Of course this doesn’t take into account quantum effects, specifically what is known as Hawking radiation, which Stephen Hawking first proposed in 1974.

The original idea of Hawking radiation stems from the uncertainty principle in quantum theory. In quantum theory there are limits to what can be known about an object. For example, you cannot know an object’s exact energy. Because of this uncertainty, the energy of a system can fluctuate spontaneously, so long as its average remains constant. What Hawking demonstrated is that near the event horizon of a black hole pairs of particles can appear, where one particle becomes trapped within the event horizon (reducing the black hole’s mass slightly) while the other can escape as radiation (carrying away a bit of the black hole’s energy).

Hawking radiation near an event horizon. Credit: NAU.

Because these quantum particles appear in pairs, they are “entangled” (connected in a quantum way).  This doesn’t matter much, unless you want Hawking radiation to radiate the information contained within the black hole.  In Hawking’s original formulation, the particles appeared randomly, so the radiation emanating from the black hole was purely random.  Thus Hawking radiation would not allow you to recover any trapped information.

To allow Hawking radiation to carry information out of the black hole, the entangled connection between particle pairs must be broken at the event horizon, so that the escaping particle can instead be entangled with the information-carrying matter within the black hole.  This breaking of the original entanglement would make the escaping particles appear as an intense “firewall” at the surface of the event horizon.  This would mean that anything falling toward the black hole wouldn’t make it into the black hole.  Instead it would be vaporized by Hawking radiation when it reached the event horizon.  It would seem then that either the physical information of an object is lost when it falls into a black hole (information paradox) or objects are vaporized before entering a black hole (firewall paradox).

In this new paper, Hawking proposes a different approach. He argues that rather than gravity warping space and time into an event horizon, the quantum fluctuations of Hawking radiation create a layer of turbulence in that region. So instead of a sharp event horizon, a black hole would have an apparent horizon that looks like an event horizon, but allows information to leak out. Hawking argues that the turbulence would be so great that the information leaving a black hole would be so scrambled as to be effectively irrecoverable.

If Stephen Hawking is right, then it could solve the information/firewall paradox that has plagued theoretical physics. Black holes would still exist in the astrophysics sense (the one in the center of our galaxy isn’t going anywhere) but they would lack event horizons. It should be stressed that Hawking’s paper hasn’t been peer reviewed, and it is a bit lacking in details. It is more of a presentation of an idea than a detailed solution to the paradox. Further research will be needed to determine if this idea is the solution we’ve been looking for.

Why Is the Solar System Flat?

It’s no mystery that the planets, moons, asteroids, etc. in the Solar System are arranged in a more-or-less flat, plate-like alignment in their orbits around the Sun.* But why is that? In a three-dimensional Universe, why should anything have a particular alignment at all? In yet another entertaining video from the folks at MinutePhysics, we see the reason behind this seemingly coincidental feature of our Solar System — and, for that matter, pretty much all planetary systems that have so far been discovered (not to mention planetary ring systems, accretion disks, many galaxies… well, you get the idea.) Check it out above.

Video by MinutePhysics. Created by Henry Reich
Continue reading “Why Is the Solar System Flat?”

Why Einstein Will Never Be Wrong

Albert Einstein during a lecture in Vienna in 1921. Credit: National Library of Austria/F Schmutzer/Public Domain

One of the benefits of being an astrophysicist is your weekly email from someone who claims to have “proven Einstein wrong”. These either contain no mathematical equations and use phrases such as “it is obvious that…”, or they are page after page of complex equations with dozens of scientific terms used in non-traditional ways. They all get deleted pretty quickly, not because astrophysicists are too indoctrinated in established theories, but because none of them acknowledge how theories get replaced.

For example, in the late 1700s there was a theory of heat known as caloric. The basic idea of caloric was that it was a fluid that existed within materials. This fluid was self-repellant, meaning it would try to spread out as evenly as possible. We couldn’t observe this fluid directly, but the more caloric a material had, the greater its temperature.

Ice-calorimeter from Antoine Lavoisier’s 1789 Elements of Chemistry. (Public Domain)

From this theory you get several predictions that actually work. Since you can’t create or destroy caloric, heat (energy) is conserved. If you put a cold object next to a hot object, the caloric in the hot object will spread out to the cold object until they reach the same temperature.  When air expands, the caloric is spread out more thinly, thus the temperature drops. When air is compressed there is more caloric per volume, and the temperature rises.

We now know there is no “heat fluid” known as caloric. Heat is a property of the motion (kinetic energy) of atoms or molecules in a material. So in physics we’ve dropped the caloric model in favor of kinetic theory. You could say we now know that the caloric model is completely wrong.

Except it isn’t. At least no more wrong than it ever was.

The basic assumption of a “heat fluid” doesn’t match reality, but the model makes predictions that are correct. In fact the caloric model works as well today as it did in the late 1700s. We don’t use it anymore because we have newer models that work better. Kinetic theory makes all the predictions caloric does and more. Kinetic theory even explains how the thermal energy of a material can be approximated as a fluid.

This is a key aspect of scientific theories. If you want to replace a robust scientific theory with a new one, the new theory must be able to do more than the old one. When you replace the old theory you now understand the limits of that theory and how to move beyond it.

In some cases even when an old theory is supplanted we continue to use it. Such an example can be seen in Newton’s law of gravity. When Newton proposed his theory of universal gravity in the 1600s, he described gravity as a force of attraction between all masses. This allowed for the correct prediction of the motion of the planets, the discovery of Neptune, the basic relation between a star’s mass and its temperature, and on and on. Newtonian gravity was and is a robust scientific theory.

Then in the early 1900s Einstein proposed a different model known as general relativity. The basic premise of this theory is that gravity is due to the curvature of space and time by masses.  Even though Einstein’s gravity model is radically different from Newton’s, the mathematics of the theory shows that Newton’s equations are approximate solutions to Einstein’s equations.  Everything Newton’s gravity predicts, Einstein’s does as well. But Einstein also allows us to correctly model black holes, the big bang, the precession of Mercury’s orbit, time dilation, and more, all of which have been experimentally validated.
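
Schematically, the correspondence works like this: in the weak-field, slow-motion limit, spacetime differs from flat space only by the Newtonian potential Φ, and geodesic motion collapses to Newton’s law. This is a standard textbook limit, sketched here with the details suppressed:

```latex
% Weak-field metric: flat spacetime plus a small Newtonian potential term
ds^2 \approx -\left(1 + \frac{2\Phi}{c^2}\right)c^2\,dt^2
           + \left(1 - \frac{2\Phi}{c^2}\right)\left(dx^2 + dy^2 + dz^2\right)

% For slow motion, geodesics in this metric obey Newton's law of gravity
\frac{d^2\vec{x}}{dt^2} \approx -\nabla\Phi, \qquad \Phi = -\frac{GM}{r}
```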

So Einstein trumps Newton. But Einstein’s theory is much more difficult to work with than Newton’s, so often we just use Newton’s equations to calculate things, such as the motion of satellites or exoplanets. If we don’t need the precision of Einstein’s theory, we simply use Newton to get an answer that is “good enough.” We may have proven Newton’s theory “wrong”, but the theory is still as useful and accurate as it ever was.

Unfortunately, many budding Einsteins don’t understand this.

Binary waves from black holes. Image Credit: K. Thorne (Caltech), T. Carnahan (NASA GSFC)

To begin with, Einstein’s gravity will never be proven wrong by a theory. It will be proven wrong by experimental evidence showing that the predictions of general relativity don’t work. Einstein’s theory didn’t supplant Newton’s until we had experimental evidence that agreed with Einstein and didn’t agree with Newton. So unless you have experimental evidence that clearly contradicts general relativity, claims of “disproving Einstein” will fall on deaf ears.

The other way to trump Einstein would be to develop a theory that clearly shows how Einstein’s theory is an approximation of your new theory, or how the experimental tests general relativity has passed are also passed by your theory.  Ideally, your new theory will also make new predictions that can be tested in a reasonable way.  If you can do that, and can present your ideas clearly, you will be listened to.  String theory and entropic gravity are examples of models that try to do just that.

But even if someone succeeds in creating a theory better than Einstein’s (and someone almost certainly will), Einstein’s theory will still be as valid as it ever was.  Einstein won’t have been proven wrong, we’ll simply understand the limits of his theory.

New Findings from NuSTAR: A New X-Ray View of the “Hand of God” and More

The "Hand ( or Fist?) of God" nebula enshrouding pulsar PSR B1509-58. The upper red cloud structure is RCW 89. The image is a composite of Chandra observations (red & green), while NuSTAR observations are denoted in blue.

One star player in this week’s findings out of the 223rd meeting of the American Astronomical Society has been the Nuclear Spectroscopic Telescope Array Mission, also known as NuSTAR. On Thursday, researchers revealed some exciting new results and images from the mission, as well as what we can expect from NuSTAR down the road.

NuSTAR was launched on June 13th, 2012 on a Pegasus XL rocket deployed from a Lockheed L-1011 “TriStar” aircraft flying near the Kwajalein Atoll in the middle of the Pacific Ocean.

Part of a new series of low-cost missions, NuSTAR is the first of its kind to employ a space telescope focusing on the high-energy X-ray end of the spectrum, spanning 5-80 keV.

Daniel Stern, part of the NuSTAR team at JPL Caltech, revealed a new X-ray image of the now-famous supernova remnant dubbed “The Hand of God.” Discovered by the Einstein X-ray observatory in 1982, the Hand is home to pulsar PSR B1509-58, or B1509 for short, and sits about 18,000 light years away in the southern hemisphere constellation Circinus. B1509 spins about 7 times per second, and the supernova that formed the pulsar is estimated to have occurred 20,000 years ago and would’ve been visible from Earth about 2,000 years ago.

A diagram of the NuSTAR satellite. (NASA/JPL/Caltech)

While the Chandra X-ray observatory has scrutinized the region before, NuSTAR can peer into its very heart. In fact, Stern notes that views from NuSTAR take on less of an appearance of a “Hand” and more of a “Fist”. Of course, the appearance of any nebula is a matter of perspective. Pareidolia litter the deep sky, from the Pillars of Creation to the Owl Nebula. We can’t help but be reminded of the mysterious “cosmic hand” that the Guardians of Oa of Green Lantern fame saw when they peered back at the moment of creation. Apparently, the “Hand” is also rather Simpson-esque, sporting only three “fingers!”

A diagram of the Hand of God. (Credit: NASA/JPL/Caltech/McGill)

NuSTAR is the first, and so far only, focusing hard X-ray observatory deployed in orbit. NuSTAR employs what’s known as grazing-incidence optics in a Wolter telescope configuration, and the concentric shells of its optics look like the layers of an onion. NuSTAR also requires a long focal length, and employs a long boom that was deployed shortly after launch.

The hard X-ray regime that NuSTAR monitors is similar to what you encounter in your dentist’s office or in a TSA body scanner. Unlike the JEM-X monitor aboard ESA’s INTEGRAL or the Swift observatory, which have a broad resolution of about half a degree to a degree, NuSTAR has an unprecedented resolution of about 18 arc seconds.

The first data release from NuSTAR was in late 2013. NuSTAR is just beginning to show its stuff, however, in terms of what researchers anticipate it’s capable of.

“NuSTAR is uniquely able to map the Titanium-44 emission, which is a radioactive tracer of (supernova) explosion physics,” Daniel Stern told Universe Today.

NuSTAR will also be able to pinpoint high energy sources at the center of our galaxy. “No previous high-energy mission has had the imaging resolution of NuSTAR,” Stern told Universe Today. “Our order-of-magnitude increase in image sharpness means that we’re able to map out that very rich region of the sky, which is populated by supernova remnants, X-ray binaries, as well as the big black hole at the center of our Galaxy, Sagittarius A* (pronounced ‘A-star’).”

NuSTAR identifies new black hole candidates (in blue) in the COSMOS field, overlaid on previous black holes spotted by Chandra in the same field, denoted in red and green. (Credit: NASA/JPL-Caltech/Yale University)

Yale University researcher Francesca Civano also presented a new image from NuSTAR depicting black holes that were previously obscured from view. NuSTAR is especially suited for this, gazing into the hearts of energetic galaxies that are invisible to observatories such as Chandra or XMM-Newton. The image presented covers the area of Hubble’s Cosmic Evolution Survey, known as COSMOS, in the constellation Sextans. In fact, Civano notes that NuSTAR has already seen the highest number of obscured black hole candidates to date.

“This is a hot topic in astronomy,” Civano said in a recent press release. “We want to understand how black holes grew and the degree to which they are obscured.”

To this end, NuSTAR researchers are taking a stacked “wedding cake” approach, looking at successively larger slices of the sky from previous surveys. These include looking at the quarter-degree field of the Great Observatories Origins Deep Survey (GOODS-S) for 18 days, the two-degree-wide COSMOS field for 36 days, and the large four-degree Swift-BAT fields for 40-day periods, hunting for serendipitous sources.

Interestingly, NuSTAR has also opened the window on the hard X-ray background that permeates the universe. This peaks in the 20-30 keV range, and is the combination of the X-ray emissions of millions of black holes.

“For several decades already, we’ve known what the sum total emission of the sky is across the X-ray regime,” Stern told Universe Today. “The shape of this cosmic X-ray background peaks strongly in the NuSTAR range. The most likely interpretation is that there are a large number of obscured black holes out there, objects that are hard to find in other energy bands. NuSTAR should find these sources.”

And NuSTAR may just represent the beginning of a new era in X-ray astronomy. ESA is moving ahead with its next-generation flagship X-ray mission, known as Athena+, set to launch sometime next decade. Ideas abound for wide-field imagers and X-ray polarimeters, and one day we may see a successor to NuSTAR, dubbed the High-Energy X-ray Probe (HEX-P), make it into space.

But for now, expect some great science out of NuSTAR, as it unlocks the secrets of the X-ray universe!

Why Our Universe is Not a Hologram

Superstrings may exist in 11 dimensions at once. Via National Institute of Technology Tiruchirappalli.

Editor’s note: This article was originally published by Brian Koberlein on G+, and it is republished here with the author’s permission.

There’s a web post from the Nature website going around entitled “Simulations back up theory that Universe is a hologram.” It’s an interesting concept, but suffice it to say, the universe is not a hologram, certainly not in the way people think of holograms. So what is this “holographic universe” thing?

It all has to do with string theory. Although there currently isn’t any experimental evidence to support string theory, and there is some evidence pointing against it, it still garners a great deal of attention because of its perceived theoretical potential. One of the theoretical challenges of string theory is that it requires all these higher dimensions, which makes it difficult to work with.

In 1993, Gerard ’t Hooft proposed what is now known as the holographic principle, which argued that the information contained within a region of space can be determined by the information at the surface that contains it. Mathematically, the space can be represented as a hologram of the surface that contains it.

That idea is not as wild as it sounds. For example, suppose there is a road 10 miles long, and it is “contained” by a start line and a finish line. Suppose the speed limit on this road is 60 mph, and I want to determine if a car has been speeding. One way I could do this is to watch a car the whole length of the road, measuring its speed the whole time. But another way is to simply measure when a car crosses the start line and the finish line. At a speed of 60 mph, a car travels a mile a minute, so if the time between start and finish is less than 10 minutes, I know the car was speeding.

A visualization of strings. Image credit: R. Dijkgraaf.

The holographic principle applies that idea to string theory. Just as it’s much easier to measure the start and finish times than to constantly measure the speed of the car, it is much easier to do physics on the surface hologram than it is to do physics in the whole volume. The idea really took off when Juan Martín Maldacena derived what is known as the AdS/CFT correspondence (an arXiv version of his paper is here), which uses the holographic principle to connect the strings of particle physics with the geometry of general relativity.

While Maldacena made a compelling argument, it was a conjecture, not a formal proof. So there has been a lot of theoretical work trying to find such a proof. Now, two papers have come out (here and here) demonstrating that the conjecture works for a particular theoretical case. Of course the situation they examined was for a hypothetical universe, not a universe like ours. So this new work is really a mathematical test that proves the AdS/CFT correspondence for a particular situation.

From this you get a headline implying that we live in a hologram. On twitter, Ethan Siegel proposed a more sensible headline: “Important idea of string theory shown not to be mathematically inconsistent in one particular way”.

Of course that would probably get less attention.