A new paper has been posted on the arXiv (a repository of research preprints) introducing the idea of a Planck star arising from a black hole. These hypothetical objects wouldn’t be stars in the traditional sense, but rather the light emitted when a black hole dies at the hands of Hawking radiation. The paper hasn’t been peer reviewed, but it presents an interesting idea and a possible observational test.
When a large star reaches the end of its life, it explodes as a supernova, which can cause its core to collapse into a black hole. In the traditional model of a black hole, the material collapses down into an infinitesimal volume known as a singularity. Of course this doesn’t take into account quantum theory.
Although we don’t have a complete theory of quantum gravity, we do know a few things. One is that black holes shouldn’t last forever. Because of quantum fluctuations near the event horizon of a black hole, a black hole will emit Hawking radiation. As a result, a black hole will gradually lose mass as it radiates. The amount of Hawking radiation it emits is inversely proportional to its size, so as the black hole gets smaller it will emit more and more Hawking radiation until it finally radiates completely away.
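The inverse relation between a black hole’s mass and its Hawking radiation can be put in numbers with the standard semiclassical formulas. Here’s a quick back-of-the-envelope sketch in Python (the exact evaporation-time prefactor depends on which particle species are emitted, so treat it as a rough estimate):

```python
# Hawking temperature T = hbar*c^3 / (8*pi*G*M*kB) and the rough
# evaporation time t ~ 5120*pi*G^2*M^3 / (hbar*c^4).
import math

hbar  = 1.0546e-34   # J*s
c     = 2.998e8      # m/s
G     = 6.674e-11    # m^3 kg^-1 s^-2
kB    = 1.381e-23    # J/K
M_sun = 1.989e30     # kg

def hawking_temperature(M):
    """Temperature in kelvin; inversely proportional to the mass M."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

def evaporation_time_years(M):
    """Rough evaporation time; it grows as M^3, so small holes die fast."""
    seconds = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
    return seconds / 3.156e7

T_sun = hawking_temperature(M_sun)     # ~6e-8 K, far colder than the CMB
t_sun = evaporation_time_years(M_sun)  # ~1e67 years
ratio = hawking_temperature(M_sun / 2) / T_sun  # halve the mass, double T
```

A solar-mass black hole is absurdly cold and long-lived; only as it shrinks does the radiation ramp up toward the final flash the Planck star idea is concerned with.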
The fact that black holes don’t last forever has led Stephen Hawking and others to propose that black holes don’t have an event horizon, but rather an apparent horizon. This would mean the material within a black hole would not collapse into a singularity, which is where this new paper comes in.
The authors propose that rather than collapsing into a singularity, the matter within a black hole will collapse until it is about a trillionth of a meter in size. At that point its density would be on the order of the Planck density. When the black hole ends its life, this “Planck star” would be revealed. Because this “star” would be at the Planck density, it would radiate at a specific wavelength of gamma rays. So if they exist, a gamma ray telescope should be able to observe them.
Just to be clear, this is still pretty speculative. So far there isn’t any observational evidence that such a Planck star exists. It is, however, an interesting solution to the paradoxical side of black holes.
We at Universe Today often hear theories purporting that Einstein is wrong, and perhaps one of the most commonly cited points is the speed limit for light used in his relativity theories. In a vacuum, light travels at close to 300,000 km/s (roughly 186,000 miles a second). Using a bit of geometry, however, isn’t there a way to make it go faster? The video below shows why you might think that would work, and why it actually wouldn’t.
“There is a classic method where you shine a laser at the moon. If you can flick that beam across the moon’s surface in less than a hundredth of a second, which is not hard to do, then that laser spot will actually move across the surface of the moon faster than the speed of light,” says the host on this Veritasium video.
“In truth, nothing here is really travelling faster than the speed of light. The individual particles coming out of my laser, the photons, are still travelling to the moon at the speed of light. It’s just that they’re landing side by side in such quick succession that they form a spot that moves faster than the speed of light, but really, it’s just an illusion.”
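The arithmetic behind the Moon example is easy to check yourself, using round numbers for the Moon’s distance and diameter:

```python
# How fast does the laser spot sweep across the Moon? The spot crosses
# the Moon's full diameter in the flick time, even though each individual
# photon only ever travels at c.
moon_distance = 3.844e8   # m, average Earth-Moon distance
moon_diameter = 3.474e6   # m
flick_time    = 0.01      # s, "less than a hundredth of a second"
c             = 2.998e8   # m/s

spot_speed = moon_diameter / flick_time   # ~3.5e8 m/s
faster_than_light = spot_speed > c        # True -- but it's an illusion
```

The spot outruns light by about 16 percent, yet no photon, and no information, ever does.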
There are many more ways that light can appear to move faster than the cosmic speed limit, and you can check them out in the video.
A recent paper by Stephen Hawking has created quite a stir, even leading Nature News to declare there are no black holes. As I wrote in an earlier post, that isn’t quite what Hawking claimed. But it is now clear that Hawking’s claim about black holes is wrong because the paradox he tries to address isn’t a paradox after all.
It all comes down to what is known as the firewall paradox for black holes. The central feature of a black hole is its event horizon. The event horizon of a black hole is basically the point of no return when approaching a black hole. In Einstein’s theory of general relativity, the event horizon is where space and time are so warped by gravity that you can never escape. Cross the event horizon and you are forever trapped.
This one-way nature of an event horizon has long been a challenge to understanding gravitational physics. For example, a black hole event horizon would seem to violate the laws of thermodynamics. One of the principles of thermodynamics is that nothing should have a temperature of absolute zero. Even very cold things radiate a little heat, but if a black hole traps light then it doesn’t give off any heat. So a black hole would have a temperature of zero, which shouldn’t be possible.
Then in 1974 Stephen Hawking demonstrated that black holes do radiate light due to quantum mechanics. In quantum theory there are limits to what can be known about an object. For example, you cannot know an object’s exact energy. Because of this uncertainty, the energy of a system can fluctuate spontaneously, so long as its average remains constant. What Hawking demonstrated is that near the event horizon of a black hole pairs of particles can appear, where one particle becomes trapped within the event horizon (reducing the black hole’s mass slightly) while the other can escape as radiation (carrying away a bit of the black hole’s energy).
While Hawking radiation solved one problem with black holes, it created another problem known as the firewall paradox. When quantum particles appear in pairs, they are entangled, meaning that they are connected in a quantum way. If one particle is captured by the black hole, and the other escapes, then the entangled nature of the pair is broken. In quantum mechanics, we would say that the particle pair appears in a pure state, and the event horizon would seem to break that state.
Last year it was shown that if Hawking radiation is in a pure state, then either it cannot radiate in the way required by thermodynamics, or it would create a firewall of high energy particles near the surface of the event horizon. This is often called the firewall paradox because according to general relativity, if you happen to be near the event horizon of a black hole you shouldn’t notice anything unusual. The fundamental idea of general relativity (the principle of equivalence) requires that if you are freely falling near the event horizon there shouldn’t be a raging firewall of high energy particles. In his paper, Hawking proposed a solution to this paradox: black holes don’t have event horizons. Instead they have apparent horizons that don’t require a firewall to obey thermodynamics. Hence the declaration of “no more black holes” in the popular press.
But the firewall paradox only arises if Hawking radiation is in a pure state, and a paper last month by Sabine Hossenfelder shows that Hawking radiation is not in a pure state. In her paper, Hossenfelder shows that instead of being due to a pair of entangled particles, Hawking radiation is due to two pairs of entangled particles. One entangled pair gets trapped by the black hole, while the other entangled pair escapes. The process is similar to Hawking’s original proposal, but the Hawking particles are not in a pure state.
So there’s no paradox. Black holes can radiate in a way that agrees with thermodynamics, and the region near the event horizon doesn’t have a firewall, just as general relativity requires. So Hawking’s proposal is a solution to a problem that doesn’t exist.
What I’ve presented here is a very rough overview of the situation. I’ve glossed over some of the more subtle aspects. For a more detailed (and remarkably clear) overview check out Ethan Seigel’s post on his blog Starts With a Bang! Also check out the post on Sabine Hossenfelder’s blog, Back Reaction, where she talks about the issue herself.
When we think of gravity, we typically think of it as a force between masses. When you step on a scale, for example, the number on the scale represents the pull of the Earth’s gravity on your mass, giving you weight. It is easy to imagine the gravitational force of the Sun holding the planets in their orbits, or the gravitational pull of a black hole. Forces are easy to understand as pushes and pulls.
But we now understand that gravity as a force is only part of a more complex phenomenon described by the theory of general relativity. While general relativity is an elegant theory, it’s a radical departure from the idea of gravity as a force. As Carl Sagan once said, “Extraordinary claims require extraordinary evidence,” and Einstein’s theory is a very extraordinary claim. But it turns out there are several extraordinary experiments that confirm the curvature of space and time.
The key to general relativity lies in the fact that everything in a gravitational field falls at the same rate. Stand on the Moon and drop a hammer and a feather, and they will hit the surface at the same time. The same is true for any object regardless of its mass or physical makeup, and this is known as the equivalence principle.
Since everything falls in the same way regardless of its mass, it means that without some external point of reference, a free-floating observer far from gravitational sources and a free-falling observer in the gravitational field of a massive body each have the same experience. For example, astronauts in the space station look as if they are floating without gravity. Actually, the gravitational pull of the Earth on the space station is nearly as strong as it is at the surface. The difference is that the space station (and everything in it) is falling. The space station is in orbit, which means it is literally falling around the Earth.
This equivalence between floating and falling is what Einstein used to develop his theory. In general relativity, gravity is not a force between masses. Instead gravity is an effect of the warping of space and time in the presence of mass. Without a force acting upon it, an object will move in a straight line. If you draw a line on a sheet of paper, and then twist or bend the paper, the line will no longer appear straight. In the same way, the straight path of an object is bent when space and time are bent. This explains why all objects fall at the same rate. The Earth’s gravity warps spacetime in a particular way, so the straight paths of all objects near the Earth are bent in the same way.
So what kind of experiment could possibly prove that gravity is warped spacetime? One stems from the fact that light can be deflected by a nearby mass. It is often argued that since light has no mass, it shouldn’t be deflected by the gravitational force of a body. This isn’t quite correct. Since light has energy, and by special relativity mass and energy are equivalent, Newton’s gravitational theory predicts that light would be deflected slightly by a nearby mass. The difference is that general relativity predicts it will be deflected twice as much.
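That factor of two is easy to put numbers on for a ray of starlight grazing the Sun’s limb (a standard textbook calculation, not a figure from the article):

```python
# Deflection of light grazing the Sun: general relativity predicts
# theta = 4GM/(c^2 R), exactly twice the Newtonian value 2GM/(c^2 R).
import math

GM_sun = 1.327e20   # m^3/s^2, the Sun's gravitational parameter
c      = 2.998e8    # m/s
R_sun  = 6.957e8    # m, solar radius = closest approach of a grazing ray

theta_gr     = 4 * GM_sun / (c**2 * R_sun)   # radians
theta_newton = theta_gr / 2                  # the Newtonian prediction
arcsec = math.degrees(theta_gr) * 3600       # ~1.75 arcseconds
```

About 1.75 arcseconds versus Newton’s 0.87: a tiny shift, but large enough to distinguish the two theories on a photographic plate.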
The effect was first observed by Arthur Eddington in 1919. Eddington traveled to the island of Principe off the coast of West Africa to photograph a total eclipse. He had taken photos of the same region of the sky sometime earlier. By comparing the eclipse photos and the earlier photos of the same sky, Eddington was able to show the apparent position of stars shifted when the Sun was near. The amount of deflection agreed with Einstein, and not Newton. Since then we’ve seen a similar effect where the light of distant quasars and galaxies are deflected by closer masses. It is often referred to as gravitational lensing, and it has been used to measure the masses of galaxies, and even see the effects of dark matter.
Another piece of evidence is known as the time-delay experiment. The mass of the Sun warps space near it, so light passing near the Sun doesn’t travel in a perfectly straight line. Instead it travels along a slightly curved path that is a bit longer. This means light from a planet on the other side of the solar system from Earth reaches us a tiny bit later than we would otherwise expect. The first measurement of this time delay was in the late 1960s by Irwin Shapiro. Radio signals were bounced off Venus from Earth when the two planets were almost on opposite sides of the Sun. The measured delay of the signals’ round trip was about 200 microseconds, just as predicted by general relativity. This effect is now known as the Shapiro time delay, and it means the average speed of light (as determined by the travel time) is slightly slower than the (always constant) instantaneous speed of light.
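For a sense of scale, the standard logarithmic formula for the round-trip delay of a signal grazing the Sun gives a number of the same order as Shapiro’s measurement (the exact value depends on the precise geometry at conjunction, so this is only an estimate):

```python
# Rough Shapiro delay for a round trip Earth -> Venus -> Earth with the
# signal grazing the Sun: dt ~ (4GM/c^3) * ln(4 * r_E * r_V / R_sun^2).
import math

GM_sun = 1.327e20   # m^3/s^2
c      = 2.998e8    # m/s
R_sun  = 6.957e8    # m, impact parameter for a grazing ray
r_E    = 1.496e11   # m, Earth's orbital radius
r_V    = 1.082e11   # m, Venus's orbital radius

delay = (4 * GM_sun / c**3) * math.log(4 * r_E * r_V / R_sun**2)
delay_us = delay * 1e6   # a couple hundred microseconds
```

A few hundred microseconds out of a round trip lasting many minutes: a minuscule but measurable excess.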
A third effect is gravitational waves. If stars warp space around them, then the motion of stars in a binary system should create ripples in spacetime, similar to the way swirling your finger in water can create ripples on the water’s surface. As the gravity waves radiate away from the stars, they take away some of the energy from the binary system. This means that the two stars gradually move closer together, an effect known as inspiralling. As the two stars inspiral, their orbital period gets shorter because their orbits are getting smaller.
For regular binary stars this effect is so small that we can’t observe it. However in 1974 two astronomers (Hulse and Taylor) discovered an interesting pulsar. Pulsars are rapidly rotating neutron stars that happen to radiate radio pulses in our direction. The pulse rate of pulsars is typically very, very regular. Hulse and Taylor noticed that this particular pulsar’s rate would speed up slightly then slow down slightly at a regular rate. They showed that this variation was due to the motion of the pulsar as it orbited a star. They were able to determine the orbital motion of the pulsar very precisely, calculating its orbital period to within a fraction of a second. As they observed their pulsar over the years, they noticed its orbital period was gradually getting shorter. The pulsar is inspiralling due to the radiation of gravity waves, just as predicted.
Finally there is an effect known as frame dragging. We have seen this effect near Earth itself. Because the Earth is rotating, it not only curves spacetime by its mass, it twists spacetime around it due to its rotation. This twisting of spacetime is known as frame dragging. The effect is not very big near the Earth, but it can be measured through the Lense-Thirring effect. Basically you put a spherical gyroscope in orbit, and see if its axis of rotation changes. If there is no frame dragging, then the orientation of the gyroscope shouldn’t change. If there is frame dragging, then the spiral twist of space and time will cause the gyroscope to precess, and its orientation will slowly change over time.
We’ve actually done this experiment with a satellite known as Gravity Probe B, and you can see the results in the figure here. As you can see, they agree very well.
Each of these experiments show that gravity is not simply a force between masses. Gravity is instead an effect of space and time. Gravity is built into the very shape of the universe.
Think on that the next time you step onto a scale.
Nature News has announced that there are no black holes. This claim is made by none other than Stephen Hawking, so does this mean black holes are no more? It depends on whether Hawking’s new idea is right, and on what you mean by a black hole. The claim is based on a new paper by Hawking that argues the event horizon of a black hole doesn’t exist.
The event horizon of a black hole is basically the point of no return when approaching a black hole. In Einstein’s theory of general relativity, the event horizon is where space and time are so warped by gravity that you can never escape. Cross the event horizon and you can only move inward, never outward. The problem with a one-way event horizon is that it leads to what is known as the information paradox.
The information paradox has its origin in thermodynamics, specifically the second law of thermodynamics. In its simplest form it can be summarized as “heat flows from hot objects to cold objects”. But the law is more useful when it is expressed in terms of entropy. In this way it is stated as “the entropy of a system can never decrease.” Many people interpret entropy as the level of disorder in a system, or the unusable part of a system. That would mean things must always become less useful over time. But entropy is really about the level of information you need to describe a system. An ordered system (say, marbles evenly spaced in a grid) is easy to describe because the objects have simple relations to each other. On the other hand, a disordered system (marbles randomly scattered) takes more information to describe, because there isn’t a simple pattern to them. So when the second law says that entropy can never decrease, it is saying that the physical information of a system cannot decrease. In other words, information cannot be destroyed.
The problem with event horizons is that you could toss an object (with a great deal of entropy) into a black hole, and the entropy would simply go away. In other words, the entropy of the universe would get smaller, which would violate the second law of thermodynamics. Of course this doesn’t take into account quantum effects, specifically what is known as Hawking radiation, which Stephen Hawking first proposed in 1974.
The original idea of Hawking radiation stems from the uncertainty principle in quantum theory. In quantum theory there are limits to what can be known about an object. For example, you cannot know an object’s exact energy. Because of this uncertainty, the energy of a system can fluctuate spontaneously, so long as its average remains constant. What Hawking demonstrated is that near the event horizon of a black hole pairs of particles can appear, where one particle becomes trapped within the event horizon (reducing the black hole’s mass slightly) while the other can escape as radiation (carrying away a bit of the black hole’s energy).
Because these quantum particles appear in pairs, they are “entangled” (connected in a quantum way). This doesn’t matter much, unless you want Hawking radiation to radiate the information contained within the black hole. In Hawking’s original formulation, the particles appeared randomly, so the radiation emanating from the black hole was purely random. Thus Hawking radiation would not allow you to recover any trapped information.
To allow Hawking radiation to carry information out of the black hole, the entangled connection between particle pairs must be broken at the event horizon, so that the escaping particle can instead be entangled with the information-carrying matter within the black hole. This breaking of the original entanglement would make the escaping particles appear as an intense “firewall” at the surface of the event horizon. This would mean that anything falling toward the black hole wouldn’t make it into the black hole. Instead it would be vaporized by Hawking radiation when it reached the event horizon. It would seem then that either the physical information of an object is lost when it falls into a black hole (information paradox) or objects are vaporized before entering a black hole (firewall paradox).
In this new paper, Hawking proposes a different approach. He argues that rather than gravity warping space and time into an event horizon, the quantum fluctuations of Hawking radiation create a layer of turbulence in that region. So instead of a sharp event horizon, a black hole would have an apparent horizon that looks like an event horizon but allows information to leak out. Hawking argues that the turbulence would scramble the information leaving a black hole so thoroughly that it is effectively irrecoverable.
If Stephen Hawking is right, then it could solve the information/firewall paradox that has plagued theoretical physics. Black holes would still exist in the astrophysics sense (the one in the center of our galaxy isn’t going anywhere) but they would lack event horizons. It should be stressed that Hawking’s paper hasn’t been peer reviewed, and it is a bit lacking on details. It is more of a presentation of an idea rather than a detailed solution to the paradox. Further research will be needed to determine if this idea is the solution we’ve been looking for.
It’s no mystery that the planets, moons, asteroids, etc. in the Solar System are arranged in a more-or-less flat, plate-like alignment in their orbits around the Sun.* But why is that? In a three-dimensional Universe, why should anything have a particular alignment at all? In yet another entertaining video from the folks at MinutePhysics, we see the reason behind this seemingly coincidental feature of our Solar System — and, for that matter, pretty much all planetary systems that have so far been discovered (not to mention planetary ring systems, accretion disks, many galaxies… well, you get the idea.) Check it out above.
One of the benefits of being an astrophysicist is your weekly email from someone who claims to have “proven Einstein wrong”. These either contain no mathematical equations and use phrases such as “it is obvious that…”, or they are page after page of complex equations with dozens of scientific terms used in non-traditional ways. They all get deleted pretty quickly, not because astrophysicists are too indoctrinated in established theories, but because none of them acknowledge how theories get replaced.
For example, in the late 1700s there was a theory of heat known as caloric. The basic idea was that caloric was a fluid that existed within materials. This fluid was self-repellant, meaning it would try to spread out as evenly as possible. We couldn’t observe this fluid directly, but the more caloric a material had, the greater its temperature.
From this theory you get several predictions that actually work. Since you can’t create or destroy caloric, heat (energy) is conserved. If you put a cold object next to a hot object, the caloric in the hot object will spread out to the cold object until they reach the same temperature. When air expands, the caloric is spread out more thinly, thus the temperature drops. When air is compressed there is more caloric per volume, and the temperature rises.
We now know there is no “heat fluid” known as caloric. Heat is a property of the motion (kinetic energy) of atoms or molecules in a material. So in physics we’ve dropped the caloric model in favor of kinetic theory. You could say we now know that the caloric model is completely wrong.
Except it isn’t. At least no more wrong than it ever was.
The basic assumption of a “heat fluid” doesn’t match reality, but the model makes predictions that are correct. In fact the caloric model works as well today as it did in the late 1700s. We don’t use it anymore because we have newer models that work better. Kinetic theory makes all the predictions caloric does and more. Kinetic theory even explains how the thermal energy of a material can be approximated as a fluid.
This is a key aspect of scientific theories. If you want to replace a robust scientific theory with a new one, the new theory must be able to do more than the old one. When you replace the old theory you now understand the limits of that theory and how to move beyond it.
In some cases even when an old theory is supplanted we continue to use it. Such an example can be seen in Newton’s law of gravity. When Newton proposed his theory of universal gravity in the 1600s, he described gravity as a force of attraction between all masses. This allowed for the correct prediction of the motion of the planets, the discovery of Neptune, the basic relation between a star’s mass and its temperature, and on and on. Newtonian gravity was and is a robust scientific theory.
Then in the early 1900s Einstein proposed a different model known as general relativity. The basic premise of this theory is that gravity is due to the curvature of space and time by masses. Even though Einstein’s gravity model is radically different from Newton’s, the mathematics of the theory shows that Newton’s equations are approximate solutions to Einstein’s equations. Everything Newton’s gravity predicts, Einstein’s does as well. But Einstein also allows us to correctly model black holes, the big bang, the precession of Mercury’s orbit, time dilation, and more, all of which have been experimentally validated.
So Einstein trumps Newton. But Einstein’s theory is much more difficult to work with than Newton’s, so often we just use Newton’s equations to calculate things. For example, the motion of satellites, or exoplanets. If we don’t need the precision of Einstein’s theory, we simply use Newton to get an answer that is “good enough.” We may have proven Newton’s theory “wrong”, but the theory is still as useful and accurate as it ever was.
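A quick illustration of why Newton is “good enough” for satellites, using round numbers for an orbit roughly at the space station’s altitude:

```python
# Newtonian circular orbit ~420 km up: v = sqrt(GM/r), period = 2*pi*r/v.
# The general-relativistic corrections are of fractional order GM/(r*c^2),
# which is utterly negligible for ordinary orbit calculations.
import math

GM_earth = 3.986e14   # m^3/s^2, Earth's gravitational parameter
R_earth  = 6.371e6    # m
c        = 2.998e8    # m/s
r = R_earth + 4.2e5   # m, orbital radius at ~420 km altitude

v = math.sqrt(GM_earth / r)              # ~7.7 km/s
period_min = 2 * math.pi * r / v / 60    # ~93 minutes
gr_correction = GM_earth / (r * c**2)    # ~7e-10, size of the GR terms
```

Newton gets the orbit to better than a part in a billion here, which is why mission planners rarely reach for Einstein’s equations unless they need to (GPS clocks being the famous exception).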
Unfortunately, many budding Einsteins don’t understand this.
To begin with, Einstein’s gravity will never be proven wrong by a theory. It will be proven wrong by experimental evidence showing that the predictions of general relativity don’t work. Einstein’s theory didn’t supplant Newton’s until we had experimental evidence that agreed with Einstein and didn’t agree with Newton. So unless you have experimental evidence that clearly contradicts general relativity, claims of “disproving Einstein” will fall on deaf ears.
The other way to trump Einstein would be to develop a theory that clearly shows how Einstein’s theory is an approximation of your new theory, or how the experimental tests general relativity has passed are also passed by your theory. Ideally, your new theory will also make new predictions that can be tested in a reasonable way. If you can do that, and can present your ideas clearly, you will be listened to. String theory and entropic gravity are examples of models that try to do just that.
But even if someone succeeds in creating a theory better than Einstein’s (and someone almost certainly will), Einstein’s theory will still be as valid as it ever was. Einstein won’t have been proven wrong, we’ll simply understand the limits of his theory.
One star player in this week’s findings out of the 223rd meeting of the American Astronomical Society has been the Nuclear Spectroscopic Telescope Array Mission, also known as NuSTAR. On Thursday, researchers revealed some exciting new results and images from the mission, as well as what we can expect from NuSTAR down the road.
NuSTAR was launched on June 13th, 2012 on a Pegasus XL rocket deployed from a Lockheed L-1011 “TriStar” aircraft flying near the Kwajalein Atoll in the middle of the Pacific Ocean.
Part of a new series of low-cost missions, NuSTAR is the first space telescope to focus on the high-energy X-ray end of the spectrum, covering roughly 5–80 keV.
Daniel Stern, part of the NuSTAR team at JPL Caltech, revealed a new X-ray image of the now-famous supernova remnant dubbed “The Hand of God.” Discovered by the Einstein X-ray observatory in 1982, the Hand is home to pulsar PSR B1509-58 or B1509 for short, and sits about 18,000 light years away in the southern hemisphere constellation Circinus. B1509 spins about 7 times per second, and the supernova that formed the pulsar is estimated to have occurred 20,000 years ago and would’ve been visible from Earth about 2,000 years ago.
While the Chandra X-ray observatory has scrutinized the region before, NuSTAR can peer into its very heart. In fact, Stern notes that views from NuSTAR take on less of an appearance of a “Hand” and more of a “Fist”. Of course, the appearance of any nebula is a matter of perspective. Pareidolia litter the deep sky, from the Pillars of Creation to the Owl Nebula. We can’t help but be reminded of the mysterious “cosmic hand” that the Guardians of Oa of Green Lantern fame saw when they peered back at the moment of creation. Apparently, the “Hand” is also rather Simpson-esque, sporting only three “fingers!”
NuSTAR is the first, and so far only, focusing hard X-ray observatory deployed in orbit. NuSTAR employs what’s known as grazing incidence optics in a Wolter telescope configuration, and the concentric shells of its optics look like the layers of an onion. NuSTAR also requires a large focal length, and employs a long boom that was deployed shortly after launch.
The hard X-ray regime that NuSTAR monitors is similar to what you encounter in your dentist’s office or in a TSA body scanner. Unlike the JEM-X monitor aboard ESA’s INTEGRAL or the Swift observatory, which have a broad resolution of about half a degree to a degree, NuSTAR has an unprecedented resolution of about 18 arc seconds.
The first data release from NuSTAR was in late 2013, but NuSTAR is just beginning to show what researchers anticipate it’s capable of.
“NuSTAR is uniquely able to map the Titanium-44 emission, which is a radioactive tracer of (supernova) explosion physics,” Daniel Stern told Universe Today.
NuSTAR will also be able to pinpoint high energy sources at the center of our galaxy. “No previous high-energy mission has had the imaging resolution of NuSTAR,” Stern told Universe Today. “Our order-of-magnitude increase in image sharpness means that we’re able to map out that very rich region of the sky, which is populated by supernovae remnants, X-ray binaries, as well as the big black hole at the center of our Galaxy, Sagittarius A* (pronounced “A-star”).”
Yale University researcher Francesca Civano also presented a new image from NuSTAR depicting black holes that were previously obscured from view. NuSTAR is especially suited for this, gazing into the hearts of energetic galaxies that are invisible to observatories such as Chandra or XMM-Newton. The image presented covers the area of Hubble’s Cosmic Evolution Survey, known as COSMOS in the constellation Sextans. In fact, Civano notes that NuSTAR has already seen the highest number of obscured black hole candidates to date.
“This is a hot topic in astronomy,” Civano said in a recent press release. “We want to understand how black holes grew and the degree to which they are obscured.”
To this end, NuSTAR researchers are taking a stacked “wedding cake” approach, looking at successively larger slices of the sky from previous surveys. These include looking at the quarter degree field of the Great Observatories Origins Deep Survey (GOODS) for 18 days, the two degree wide COSMOS field for 36 days, and the large four degree Swift-BAT fields for 40-day periods hunting for serendipitous sources.
Interestingly, NuSTAR has also opened the window on the hard X-ray background that permeates the universe. This peaks in the 20–30 keV range, and is the combination of the X-ray emissions of millions of black holes.
“For several decades already, we’ve known what the sum total emission of the sky is across the X-ray regime,” Stern told Universe Today. “The shape of this cosmic X-ray background peaks strongly in the NuSTAR range. The most likely interpretation is that there are a large number of obscured black holes out there, objects that are hard to find in other energy bands. NuSTAR should find these sources.”
And NuSTAR may just represent the beginning of a new era in X-ray astronomy. ESA is moving ahead with its next generation flagship X-ray mission, known as Athena+, set to launch sometime next decade. Ideas abound for wide-field imagers and X-ray polarimeters, and one day, we may see a successor to NuSTAR dubbed the High-Energy X-ray Probe (HEX-P) make it into space.
But for now, expect some great science out of NuSTAR, as it unlocks the secrets of the X-ray universe!
Editor’s note: This article was originally published by Brian Koberlein on G+, and it is republished here with the author’s permission.
There’s a web post from the Nature website going around entitled “Simulations back up theory that Universe is a hologram.” It’s an interesting concept, but suffice it to say, the universe is not a hologram, certainly not in the way people think of holograms. So what is this “holographic universe” thing?
It all has to do with string theory. Although there currently isn’t any experimental evidence to support string theory, and some evidence pointing against it, it still garners a great deal of attention because of its perceived theoretical potential. One of the theoretical challenges of string theory is that it requires all these higher dimensions, which makes it difficult to work with.
In 1993, Gerard ’t Hooft proposed what is now known as the holographic principle, which argues that the information contained within a region of space can be determined by the information at the surface that contains it. Mathematically, the space can be represented as a hologram of the surface that contains it.
That idea is not as wild as it sounds. For example, suppose there is a road 10 miles long, and it is “contained” by a start line and a finish line. Suppose the speed limit on this road is 60 mph, and I want to determine if a car has been speeding. One way I could do this is to watch a car along the whole length of the road, measuring its speed the entire time. But another way is to simply measure when the car crosses the start line and the finish line. At 60 mph, a car travels a mile a minute, so if the time between start and finish is less than 10 minutes, I know the car was speeding somewhere along the road.
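The boundary-measurement logic of the speeding example can be captured in a few lines of code. This is just an illustrative sketch (the road length, speed limit, and function name are made up for this example), showing how knowing only the crossing times at the two endpoints is enough to infer what must have happened in between:

```python
# Hypothetical illustration of the "boundary" reasoning in the speeding
# example: we only observe the two endpoints of the road, not the car's
# speed along it, yet we can still conclude whether it must have sped.

ROAD_LENGTH_MILES = 10.0
SPEED_LIMIT_MPH = 60.0

def must_have_sped(start_time_min: float, finish_time_min: float) -> bool:
    """Return True if the car's average speed exceeded the limit.

    If the average speed over the road exceeds the limit, then the
    instantaneous speed must have exceeded it at some point in between
    (this is the mean value theorem at work).
    """
    elapsed_hours = (finish_time_min - start_time_min) / 60.0
    average_mph = ROAD_LENGTH_MILES / elapsed_hours
    return average_mph > SPEED_LIMIT_MPH

# At 60 mph the 10-mile road takes exactly 10 minutes, so crossing it
# in under 10 minutes means the car was speeding somewhere.
print(must_have_sped(0.0, 8.0))   # crossed in 8 minutes -> True
print(must_have_sped(0.0, 12.0))  # crossed in 12 minutes -> False
```

The point of the sketch is the same as the holographic principle's: measurements at the boundary (start and finish lines) constrain what happened in the interior (the road), without ever observing the interior directly.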
The holographic principle applies that idea to string theory. Just as it’s much easier to measure the start and finish times than to constantly measure the speed of the car, it is much easier to do physics on the surface hologram than it is to do physics in the whole volume. The idea really took off when Juan Martín Maldacena derived what is known as the AdS/CFT correspondence (an arXiv version of his paper is here), which uses the holographic principle to connect the strings of particle physics string theory with the geometry of general relativity.
While Maldacena made a compelling argument, it was a conjecture, not a formal proof. So there has been a lot of theoretical work trying to find such a proof. Now, two papers have come out (here and here) demonstrating that the conjecture works for a particular theoretical case. Of course the situation they examined was for a hypothetical universe, not a universe like ours. So this new work is really a mathematical test that proves the AdS/CFT correspondence for a particular situation.
From this you get a headline implying that we live in a hologram. On Twitter, Ethan Siegel proposed a more sensible headline: “Important idea of string theory shown not to be mathematically inconsistent in one particular way”.
Quantum physics is a fascinating yet complicated subject to understand, and one of the things that freaks out physics students every year is the concept of entanglement. That occurs when physicists measure the state of one particle and the measurement instantly affects the state of its entangled partner. (In reality, the particles are in multiple states — spinning in multiple directions, for example — and can only be said to be in one state or another when they are measured.)
“Spooky action at a distance” is how Albert Einstein reportedly referred to it. Here’s the new bit about this: Julian Sonner, a senior postdoctoral researcher at the Massachusetts Institute of Technology, led research showing that when two entangled quarks are created, string theory predicts a wormhole linking the quarks.
According to MIT, this could help researchers better understand the link between gravity (which takes place on a large scale) and quantum mechanics (which takes place on a very tiny scale). As MIT puts it, up to now it has been very hard for physicists to “explain gravity in quantum-mechanical terms”, fueling the long search for a single unifying theory of the universe. No luck yet, but many people believe it exists.
“There are some hard questions of quantum gravity we still don’t understand, and we’ve been banging our heads against these problems for a long time,” Sonner stated. “We need to find the right inroads to understanding these questions.”
Quantum entanglement sounds so foreign to our experience because it appears to exceed the speed of light, which would violate Einstein’s special relativity. (The speed limit is still being tested, of course, which is why scientists were so excited when it appeared particles were moving faster than light in a 2011 experiment — a result later traced to a faulty cable connection.)
Anyway, this is how the new research proceeded:
– Sonner examined the work of Juan Maldacena of the Institute for Advanced Study and Leonard Susskind of Stanford University. The physicists were looking at how entangled black holes would behave. “When the black holes were entangled, then pulled apart, the theorists found that what emerged was a wormhole — a tunnel through space-time that is thought to be held together by gravity. The idea seemed to suggest that, in the case of wormholes, gravity emerges from the more fundamental phenomenon of entangled black holes,” MIT stated.
– Sonner then set about to create quarks to see if he could watch what happens when two are entangled with each other. Using an electric field, he was able to catch pairs of particles coming out of a vacuum environment with a few “transient” particles in it.
– Once he caught the particles, he mapped them in terms of space-time (four-dimensional space). Note: gravity is treated as living in a fifth dimension because it can bend space-time.
– Sonner then tried to figure out what would happen in the fifth dimension when quarks were entangled in the fourth dimension, using a string theory concept called holographic duality. “While a hologram is a two-dimensional object, it contains all the information necessary to represent a three-dimensional view. Essentially, holographic duality is a way to derive a more complex dimension from the next lowest dimension,” MIT stated.
– And it was under holographic duality that Sonner found a wormhole would be created. The implication is that gravity itself may come out of entanglement of these particles, and that the bending we see in the universe would also be due to the entanglement.
“It’s the most basic representation yet that we have where entanglement gives rise to some sort of geometry,” Sonner stated. “What happens if some of this entanglement is lost, and what happens to the geometry? There are many roads that can be pursued, and in that sense, this work can turn out to be very helpful.”