Hawking Radiation Replicated in a Laboratory?

In honor of Dr. Stephen Hawking, the COSMOS center will be creating the most detailed 3D mapping effort of the Universe to date. Credit: BBC, Illus.: T.Reyes

In 1974, Dr. Stephen Hawking put forward a disturbing theory: black holes evaporate. Black holes, he claimed, are not absolutely black and cold but rather radiate energy, and they do not last forever. So-called “Hawking radiation” became one of the physicist’s most famous theoretical predictions. Now, 40 years later, a researcher has announced the creation of a simulation of Hawking radiation in a laboratory setting.

The possibility of a black hole emerged from Einstein’s theory of General Relativity. In 1916, Karl Schwarzschild was the first to realize the possibility of a gravitational singularity surrounded by a boundary beyond which neither light nor matter, once entering, can escape.

This month, Jeff Steinhauer of the Technion – Israel Institute of Technology describes in his paper, “Observation of self-amplifying Hawking radiation in an analogue black-hole laser,” published in the journal Nature Physics, how he created an analogue event horizon in a substance cooled to near absolute zero and, using lasers, was able to detect the emission of Hawking radiation. Could this be the first valid evidence of the existence of Hawking radiation and, consequently, seal the fate of all black holes?

This is not the first attempt at creating a Hawking radiation analogue in a laboratory. In 2010, an analogue was created from a block of glass, a laser, mirrors and a chilled detector (Phys. Rev. Lett., September 2010); no smoke accompanied the mirrors. An ultra-short pulse of intense laser light passing through the glass induced a refractive index perturbation (RIP) which functioned as an event horizon. Light was seen emitting from the RIP. Nevertheless, the results by F. Belgiorno et al. remain controversial, and more experiments were warranted.

The latest attempt at replicating Hawking radiation, by Steinhauer, takes a more high-tech approach. He creates a Bose-Einstein condensate, an exotic state of matter at a temperature very near absolute zero. Boundaries created within the condensate function as an event horizon. But before going into further details, let us take a step back and consider what Steinhauer and others are trying to replicate.

Artists’ illustrations of black holes are guided by descriptions given to them by theorists. There are many illustrations. A black hole has never been seen up close. However, to have Hawking radiation, all the theatrics of accretion disks and matter being funneled off a companion star are unnecessary. Just a black hole in the darkness of space will do. (Illustration: public domain)

The recipe for making Hawking radiation begins with a black hole. Any size black hole will do. Hawking’s theory states that smaller black holes radiate more rapidly than larger ones and, in the absence of matter falling into them (accretion), will “evaporate” much faster. Giant black holes can take longer than a million times the present age of the Universe to evaporate by way of Hawking radiation. Like a tire with a slow leak, most black holes would still get you to the nearest repair station.
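
To get a feel for just how slow this leak is, the standard textbook expressions for a black hole's Hawking temperature (T = ħc³/8πGMk_B) and evaporation time (t ≈ 5120πG²M³/ħc⁴) can be evaluated directly. Here is a minimal sketch in Python; the formulas are the conventional ones from Hawking's theory, and the masses chosen are just illustrative:

```python
import math

# Physical constants (SI units)
hbar  = 1.0546e-34   # reduced Planck constant [J s]
c     = 2.998e8      # speed of light [m/s]
G     = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
k_B   = 1.381e-23    # Boltzmann constant [J/K]
M_SUN = 1.989e30     # solar mass [kg]
YEAR  = 3.156e7      # seconds per year

def hawking_temperature(mass_kg):
    """Black-body temperature of a black hole's Hawking radiation."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def evaporation_time(mass_kg):
    """Approximate time for a black hole to evaporate completely, in years."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4) / YEAR

for m in (M_SUN, 1e6 * M_SUN):
    print(f"M = {m/M_SUN:.0e} solar masses: "
          f"T = {hawking_temperature(m):.2e} K, "
          f"t_evap = {evaporation_time(m):.2e} years")

# A solar-mass black hole radiates at ~6e-8 K and needs ~2e67 years to
# evaporate, vastly longer than the ~1.4e10-year age of the Universe,
# which is the article's point about the slowness of the leak.
```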

So you have a black hole. It has an event horizon, a boundary also known as the Schwarzschild radius; light or matter checking into the event horizon can never check out. Or so went the accepted understanding until Dr. Hawking’s theory upended it. Outside the event horizon is ordinary space, with some caveats; consider it space with some spices added. At the event horizon, the gravity of the black hole is so extreme that it induces and magnifies quantum effects.

All of space, within us and surrounding us to the ends of the Universe, includes a quantum vacuum. Everywhere in the quantum vacuum, virtual particle pairs are appearing and disappearing, annihilating each other on extremely short time scales. Under the extreme conditions at the event horizon, virtual particle-antiparticle pairs, such as an electron and a positron, are materializing. When a pair appears close enough to the event horizon, one of the two virtual particles can be captured by the black hole’s gravity, leaving the survivor free to join the radiation emanating from around the black hole; it is this radiation, as a whole, that astronomers can use to detect the presence of a black hole without directly observing it. This unpairing of virtual particles at the event horizon is what produces Hawking radiation, which by itself represents a net loss of mass from the black hole.

So why don’t astronomers just search space for Hawking radiation? The problem is that the radiation is very weak and is overwhelmed by the radiation produced by the many other physical processes surrounding a black hole with an accretion disk; it is drowned out by a chorus of energetic processes. So the most immediate possibility is to replicate Hawking radiation using an analogue. While Hawking radiation is weak in comparison to the mass and energy of a black hole, the radiation has essentially all the time in the Universe to chip away at its parent body.

This is where the growing understanding of black holes converged in Dr. Hawking’s seminal work. Theorists including Hawking realized that despite the quantum theory and gravitational theory needed to describe a black hole, black holes also behave like black bodies: they are governed by thermodynamics and are slaves to entropy. The production of Hawking radiation can be characterized as a thermodynamic process, and this is what leads us back to the experimentalists. Other thermodynamic processes could be used to replicate the emission of this type of radiation.

Using the Bose-Einstein condensate in a vessel, Steinhauer directed laser beams into the delicate condensate to create an event horizon. His experiment creates sound waves that become trapped between two boundaries that define the event horizon. Steinhauer found that the sound waves at his analogue event horizon were amplified, as happens to light in a common laser cavity but also as predicted by Dr. Hawking’s theory of black holes. Light escaping from the laser at the analogue event horizon, Steinhauer explains, represents the long-sought Hawking radiation.

Publication of this work in Nature Physics required considerable peer review, but that alone does not validate the findings. Steinhauer’s work will now face even greater scrutiny, and others will attempt to duplicate it. His lab setup is an analogue, and it remains to be verified that what he is observing truly represents Hawking radiation.

References:

“Observation of self-amplifying Hawking radiation in an analogue black-hole laser”, J. Steinhauer, Nature Physics, 12 October 2014

“Hawking Radiation from Ultrashort Laser Pulse Filaments”, F. Belgiorno et al., Physical Review Letters, September 2010

“Black hole explosions?”, S. W. Hawking, Nature, 01 March 1974

“The Quantum Mechanics of Black Holes”, S. W. Hawking, Scientific American, January 1977

Old Equations Shed New Light on Quasars

An artist’s illustration of the early Universe. Image Credit: NASA

There’s nothing more out of this world than quasi-stellar objects or more simply – quasars. These are the most powerful and among the most distant objects in the Universe. At their center is a black hole with the mass of a million or more Suns. And these powerhouses are fairly compact – about the size of our Solar System. Understanding how they came to be and how — or if — they evolve into the galaxies that surround us today are some of the big questions driving astronomers.

Now, a new paper by Yue Shen and Luis C. Ho, “The diversity of quasars unified by accretion and orientation,” in the journal Nature confirms the importance of a mathematical derivation by the famous astrophysicist Sir Arthur Eddington during the first half of the 20th century for understanding not just stars but the properties of quasars, too. Ironically, Eddington did not believe black holes existed, but now his derivation, the Eddington luminosity, can be used more reliably to determine important properties of quasars across vast stretches of space and time.

A quasar is recognized as an accreting (meaning matter is falling onto it) supermassive black hole at the center of an “active galaxy”. Most known quasars exist at distances that place them very early in the Universe; the most distant lies roughly 13 billion light-years away in light travel time, a mere 770 million years after the Big Bang. Somehow, quasars and the nascent galaxies surrounding them evolved into the galaxies present in the Universe today. At their extreme distances they are point-like, indistinguishable from a star except that the spectra of their light differ greatly from a star’s. Some would be as bright as our Sun if they were placed 33 light-years away, meaning they are over a trillion times more luminous than our star.
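
That “over a trillion times” figure follows directly from the inverse-square law: if a source 33 light-years away looks as bright as the Sun does at 1 AU, its luminosity must exceed the Sun's by the square of the distance ratio. A quick sanity check, as a sketch:

```python
# Inverse-square sanity check for the "over a trillion times more luminous" claim.
AU_PER_LIGHT_YEAR = 63_241                # astronomical units in one light-year

distance_ratio = 33 * AU_PER_LIGHT_YEAR   # (33 ly) / (1 AU), dimensionless
luminosity_ratio = distance_ratio ** 2

print(f"L_quasar / L_sun ~ {luminosity_ratio:.2e}")
# -> ~4.4e12, i.e. a few trillion Suns, consistent with the article's claim.
```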

An artist’s illustration of the central engine of a quasar. These “quasi-stellar objects” (QSOs) are now recognized as the supermassive black holes at the centers of emerging galaxies in the early Universe. (Photo Credit: NASA)

The Eddington luminosity defines the maximum luminosity a star in equilibrium, specifically hydrostatic equilibrium, can exhibit. Extremely massive stars and black holes can exceed this limit, but stars, to remain stable for long periods, must be in hydrostatic equilibrium between the inward force of gravity and the outward pressure of their own radiation. Such is the case of our star, the Sun; otherwise it would collapse or expand, and in either case it would not have provided the stable source of light that has nourished life on Earth for billions of years.
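
For a body accreting ionized hydrogen, the Eddington luminosity takes the standard textbook form L_Edd = 4πGMm_p·c/σ_T, where m_p is the proton mass and σ_T the Thomson scattering cross-section. A minimal sketch of the scaling, assuming this classic form (the “modified Eddington luminosity” mentioned below adds further corrections, which are not modeled here); the million-solar-mass example is illustrative:

```python
import math

# Constants (SI units)
G       = 6.674e-11    # gravitational constant
m_p     = 1.673e-27    # proton mass [kg]
c       = 2.998e8      # speed of light [m/s]
sigma_T = 6.652e-29    # Thomson scattering cross-section [m^2]
M_SUN   = 1.989e30     # solar mass [kg]
L_SUN   = 3.846e26     # solar luminosity [W]

def eddington_luminosity(mass_kg):
    """Maximum luminosity (W) at which radiation pressure on ionized
    hydrogen balances gravity: the classic Eddington limit."""
    return 4 * math.pi * G * mass_kg * m_p * c / sigma_T

# A quasar powered by a million-solar-mass black hole:
L = eddington_luminosity(1e6 * M_SUN)
print(f"L_Edd = {L:.2e} W = {L / L_SUN:.2e} L_sun")
# -> ~1.3e37 W, about 3e10 Suns; the limit scales linearly with mass.
```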

Scientific models often start simple, such as Bohr’s model of the hydrogen atom, and later observations reveal intricacies that require more complex theory to explain, such as quantum mechanics for the atom. The Eddington luminosity and Eddington ratio can be compared to knowing the thermal efficiency and compression ratio of an internal combustion engine; once such values are known, other properties follow.

Several other factors regarding the Eddington luminosity are now known, and they are necessary to define the “modified Eddington luminosity” used today.

The new paper in Nature shows how the Eddington luminosity helps explain the driving force behind the main sequence of quasars. Shen and Ho call their work the missing definitive proof that quantifies the correlation of quasar properties with a quasar’s Eddington ratio.

They used archival observational data to uncover the relationship between the Eddington ratio and the strength of the optical iron (Fe) and oxygen ([O III]) emission lines, which are strongly tied to the physical properties of the quasar’s central engine, a supermassive black hole. Their work provides the confidence and the correlations needed to move forward in our understanding of quasars and their relationship to the evolution of galaxies from the early Universe up to our present epoch.

Astronomers have been studying quasars for a little over 50 years. Beginning in 1960, quasar discoveries began to accumulate, but only through radio telescope observations. Then a very accurate position measurement of quasar 3C 273 was completed using a lunar occultation. With this in hand, Dr. Maarten Schmidt of the California Institute of Technology was able to identify the object in visible light using the 200-inch Palomar telescope. Reviewing the strange spectral lines in its light, Schmidt reached the right conclusion: quasar spectra exhibit an extreme redshift, and it is due to cosmological effects. The cosmological redshift of quasars meant that they lie at great distances from us in space and time. It also spelled the demise of the Steady State theory of the Universe and gave further support to an expanding Universe that emanated from a singularity, the Big Bang.

Dr. Maarten Schmidt, Caltech, with Donald Lynden-Bell, were the first recipients of the Kavli Prize in Astrophysics, “for their seminal contributions to understanding the nature of quasars”. While in high school, this author had the privilege to meet Dr. Schmidt at the Los Angeles Museum of Natural History after his presentation to a group of students. (Photo Credit: Caltech)

The researchers, Yue Shen and Luis C. Ho, are from the Kavli Institute for Astronomy and Astrophysics at Peking University, working with the Carnegie Observatories in Pasadena, California.

References and further reading:

“The diversity of quasars unified by accretion and orientation”, Yue Shen and Luis C. Ho, Nature, 11 September 2014

“What is a Quasar?”, Universe Today, Fraser Cain, August 12, 2013

“Interview with Maarten Schmidt”, Caltech Oral Histories, 1999

“Fifty Years of Quasars, a Symposium in honor of Maarten Schmidt”, Caltech, Sept 9, 2013

Comet Siding Spring: Close Call for Mars, Wake Up Call for Earth?

Five orbiters from India, the European Space Agency and the United States will nestle behind Mars as comet Siding Spring passes at a speed of 200,000 km/h (125,000 mph). At right: Comet Shoemaker-Levy 9’s impacts on Jupiter and the Chelyabinsk asteroid over Russia. (Credits: NASA, ESA, ISRO)

It was 20 years ago this past July when images of Jupiter being pummeled by a comet caught the world’s attention. Comet Shoemaker-Levy 9 had flown too close to Jupiter. It was captured by the giant planet’s gravity and torn into a string of beads. One by one the comet fragments impacted Jupiter — leaving blemishes on its atmosphere, each several times larger than Earth in size.

Until that event, no one had seen a comet impact a planet. Now, Mars will see a very close passage of the comet Siding Spring on October 19th. When the comet was first discovered, astronomers quickly realized that it was heading straight at Mars. In fact, it appeared it was going to be a bulls-eye hit — except for the margin of error in calculating a comet’s trajectory from 1 billion kilometers (620 million miles, 7 AU) away.

It took several months of analysis to rule out a cataclysmic impact on Mars. So today, Mars faces just a cosmic close shave. But this comet packs enough energy that an impact would have globally altered Mars’ surface and atmosphere.

So what should we Earthlings gather from this and other events like it? Are we next? Should we be prepared for impacts from these mile-wide objects, and why?

For one, ask any dinosaur and you will have your answer.

An illustration of comet Siding Spring in comparison to Comet 67P atop Los Angeles. The original image was the focus of Bob King’s article, “What Comets, Parking Lots and Charcoal Have in Common”. (Credit: ESA, anosmicovni)

One can say that Mars was spared, as were the five orbiting spacecraft from India (Mars Orbiter Mission), the European Space Agency (Mars Express) and the United States (Mars Odyssey, MRO, MAVEN). We have Scottish-Australian astronomer Robert McNaught to thank for discovering the comet on January 3, 2013, using the half-meter (20-inch) Uppsala Southern Schmidt Telescope at Siding Spring, Australia.

Initially the margin of error in the trajectory was large, but a series of observations gradually reduced it. By late summer 2014, Mars was in the clear, and astronomers could confidently say the comet would pass close but not impact. Furthermore, as observations accumulated, including estimates of the outpouring of gases and dust, comet Siding Spring shrank in size: estimates of potentially tens of kilometers came down to about 700 meters (4/10 of a mile) in diameter. Estimates of the gas and dust production are low, the tail and coma (the spherical gas cloud surrounding the solid body) are small, and only the outer edge of both will interact with Mars’ atmosphere.

The mass, velocity and kinetic energy of celestial bodies can be deceiving. It is useful to compare comet Siding Spring to common or man-made objects.

Yet this was a close call for Mars; a collision could not be ruled out for over six months. While this comet is small, it is moving relative to Mars at a speed of 200,000 kilometers per hour (125,000 mph, 56 km/sec). This small body packs a wallop. From high school science or intro college physics, many of us know that the kinetic energy of an object increases with the square of its velocity: double the velocity and the energy goes up by a factor of 4; triple it and the energy increases ninefold.
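
To put rough numbers on that wallop, here is a minimal back-of-the-envelope sketch. The diameter and speed come from the article; the comet's bulk density is an assumption (comet nuclei are typically a few hundred kg/m³), so treat the result as an order-of-magnitude estimate only:

```python
import math

# Figures from the article
diameter_m = 700.0          # estimated nucleus diameter [m]
v = 56_000.0                # speed relative to Mars [m/s] (200,000 km/h)

# ASSUMPTION: a typical low comet bulk density, not a measured value
density = 500.0             # [kg/m^3]

radius = diameter_m / 2
volume = (4 / 3) * math.pi * radius**3          # sphere approximation
mass = density * volume                         # [kg]
kinetic_energy = 0.5 * mass * v**2              # [J]

MEGATON_TNT = 4.184e15                          # joules per megaton of TNT
print(f"mass ~ {mass:.1e} kg")
print(f"kinetic energy ~ {kinetic_energy:.1e} J "
      f"~ {kinetic_energy / MEGATON_TNT:.0f} megatons of TNT")
# -> ~1e20 J, tens of thousands of megatons: enough to globally alter a
#    planet's surface and atmosphere, as the article notes.
```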

So the close shave for Mars is yet another wake-up call for the “intelligent” space-faring beings of planet Earth. A wake-up call because the close passage of a comet could just as easily have involved Earth. Astronomers would have warned the world of a comet heading straight for us, one that could wipe out 70% of all life, as happened 65 million years ago to the dinosaurs. Replace dinosaurs with humans and you have the full picture.

Time would have been of the essence. The space-faring nations of the world — those of the EU, Russia, the USA, Japan and others — would have gathered and attempted to conceive spacecraft, likely carrying nuclear weapons, that could be built and launched within a few months. Probably several vehicles with weapons would be launched at once, leaving Earth as soon as possible. Intercepting a comet or asteroid farther out would give the impulse from the explosions more time to push the incoming body away from the Earth.

There is no way humanity could sit on its collective hands and wait for months while astronomers observed and measured until they could claim it would be just a close call for Earth. We can imagine the panic it would cause. Recall the scenes from the movie adaptation of Carl Sagan’s Contact, with people of every persuasion expressing their hopes and fears at 120 decibels. Even a small comet or asteroid, only a half kilometer (a third of a mile) in diameter, would be a cataclysmic event for Mars or Earth.

Yet in the time that has transpired since the discovery of comet Siding Spring (January 3, 2013), the Chelyabinsk asteroid (~20 m / 65 ft) exploded in an air burst that injured some 1,500 people in Russia. The telescope that discovered comet Siding Spring was decommissioned in late 2013 and the Southern Near-Earth Object Survey was shut down, leaving the southern skies without a dedicated telescope for finding near-Earth asteroids. And proposals such as the Sentinel project by the B612 Foundation remain underfunded.

We know of the dangers from small celestial bodies such as comets and asteroids. Government organizations in the United States and groups at the United Nations are discussing plans. There is plenty of time to find such objects and protect the Earth, but not necessarily time to waste.

Previous U.T. Siding Spring stories:
“What Comets, Parking Lots and Charcoal Have in Common”, Bob King, Sept 5, 2014
“MAVEN Mars Orbiter Ideally Poised to Uniquely Map Comet Siding Spring Composition – Exclusive Interview with Principal Investigator Bruce Jakosky”, Ken Kremer, Sept 5, 2014
“NASA Preps for Nail-biting Comet Flyby of Mars”, Bob King, July 26, 2014

Time Dilation Confirmed in the Lab


It sounds like science fiction, but the time you experience between two events depends directly on the path you take through the universe. In other words, Einstein’s theory of special relativity postulates that a person traveling in a high-speed rocket would age more slowly than people back on Earth.

Although few physicists doubt Einstein was right, it’s crucial to verify time dilation to the best possible accuracy. Now an international team of researchers, including Nobel laureate Theodor Hänsch, director of the Max Planck Institute of Quantum Optics, has done just that.

Tests of special relativity date back to 1938. But once we started going to space regularly, we had to learn to deal with time dilation on a daily basis. GPS satellites, for example, are basically clocks in orbit. They travel at a whopping 14,000 kilometers per hour at an altitude of about 20,000 kilometers above the Earth’s surface. Relative to an atomic clock on the ground, their clocks lose about 7 microseconds per day to velocity time dilation, a number that has to be taken into account for the system to work properly.
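
The 7-microseconds-a-day figure is easy to reproduce from the special-relativistic time dilation factor γ = 1/√(1 − v²/c²). A minimal sketch, covering the velocity effect only (the general-relativistic effect of weaker gravity at orbital altitude, which works in the opposite direction and is larger, is deliberately left out):

```python
import math

c = 2.998e8                      # speed of light [m/s]
v = 14_000 * 1000 / 3600         # 14,000 km/h in m/s (~3.9 km/s)

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
SECONDS_PER_DAY = 86_400

# Proper time lost by the moving clock over one day on the ground
lag = (gamma - 1) * SECONDS_PER_DAY
print(f"gamma - 1 = {gamma - 1:.2e}")
print(f"daily lag = {lag * 1e6:.1f} microseconds")
# -> about 7 microseconds per day, matching the figure quoted above.
```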

To test time dilation to much higher precision, Benjamin Botermann of Johannes Gutenberg University, Germany, and colleagues accelerated lithium ions to one-third the speed of light. Here the Doppler shift comes into play: light from any ions flying toward the observer will be blueshifted, and light from any ions flying away will be redshifted.

The size of the Doppler shift depends on the ions’ motion relative to the observer. But that motion also makes their internal clock run slow, which redshifts the light from the observer’s point of view, an effect that you should be able to measure in the lab.

So the team stimulated transitions in the ions using two lasers propagating in opposite directions. Any shifts in the absorption frequencies of the ions then depend on the classical Doppler effect, which is easily calculated, plus the redshift due to time dilation.
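
The elegance of this two-laser (Ives-Stilwell-type) arrangement is that, for counter-propagating beams, special relativity predicts the product of the two Doppler-shifted resonance frequencies equals the square of the rest-frame frequency, ν₊ν₋ = ν₀², regardless of the ion speed; any deviation would signal a failure of time dilation. A minimal sketch of that check at β = 1/3 (ν₀ here is an arbitrary placeholder, not the actual lithium transition frequency):

```python
import math

beta = 1 / 3                                  # ion speed as a fraction of c
gamma = 1 / math.sqrt(1 - beta**2)            # time dilation factor, ~1.06

nu_0 = 1.0                                    # rest-frame transition frequency
                                              # (placeholder; units cancel)

# Relativistic Doppler shifts for the two counter-propagating lasers
nu_parallel     = nu_0 * gamma * (1 + beta)   # blueshifted beam
nu_antiparallel = nu_0 * gamma * (1 - beta)   # redshifted beam

product = nu_parallel * nu_antiparallel
print(f"gamma = {gamma:.6f}")
print(f"nu+ * nu- = {product:.12f} (should equal nu_0^2 = {nu_0**2})")
# gamma^2 * (1 - beta^2) = 1 exactly, so the product relation holds if and
# only if the gamma factor (time dilation) enters as relativity predicts.
```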

The team verified their time dilation prediction to a few parts per billion, improving on previous limits. The findings were published on Sept. 16 in the journal Physical Review Letters.

A Fun Way of Understanding E=mc²

Einstein's Relativity, yet another momentous advancement for humanity brought forth from an ongoing mathematical dialogue. Image via Pixabay.

Many people fail to realize just how much energy there is locked up in matter. The nucleus of any atom is an oven of intense radiation, and when you open the oven door, that energy spills out, oftentimes violently. However, there is something even more intrinsic to this aspect of matter that escaped scientists for years.

It wasn’t until the brilliance of Albert Einstein that we were able to fully grasp the correlation between mass and energy. Enter E=mc². This seemingly simple algebraic formula expresses the energy equivalence of any given amount of mass. Many have heard of it, but not very many understand what it implies. So, for the next few minutes, I will attempt to convey to you the magnitude of your own personal energy equivalence.

First, we must break down this equation. What do each of the letters mean? What are their values? Let’s break it down from left to right:

Albert Einstein. Image credit: Library of Congress

E represents energy, which we measure in Joules. The Joule is the SI unit of energy, with dimensions of kilograms times meters squared per second squared [kg·m²/s²]. Essentially, one Joule is the energy transferred when a force of one newton moves an object one meter in the direction of the force.

m represents the mass of the specified object. For this equation, we measure mass in kilograms (1 kg = 1,000 g).

c represents the speed of light. In a vacuum, light moves at 186,282 miles per second. However, in science we use the SI (International System of Units), with meters and kilometers rather than feet and miles. So whenever we do our calculations for light, we use 3.00 × 10⁸ m/s, or 300,000,000 meters per second.

So essentially what the equation says is that for a specific amount of mass (in kilograms), if you multiply it by the speed of light squared, (3.00 × 10⁸ m/s)², you get its energy equivalence (in Joules). So, what does this mean? How can I relate to this, and how much energy is in matter? Well, here comes the fun part. We are about to conduct an experiment.

This isn’t an experiment that requires fancy equipment or a large laboratory. All we need is simple math and our imagination. Now, before I go on, I would like to point out that I am using this equation in its most basic form. There are more complex derivatives of the equation used for many different applications. It is also worth mentioning that when two atoms fuse (such as hydrogen fusing into helium in the core of our star), only about 0.7% of the mass is converted into energy. For our purposes we needn’t worry about this, as I am simply illustrating the incredible amount of energy that constitutes your equivalence in mass, not the fusion of all of your mass into energy.

Let’s begin by collecting the data so that we can input it into our equation. I weigh roughly 190 pounds. Again, as we use SI units in science, we need to convert this over from pounds to grams. Here is how we do this:

1 Josh = 190 lbs
1 lb = 453.6 g
So 190 lbs × 453.6 g/lb = 86,184 g
So 1 Josh = 86,184 g

Since our measurement for E is in Joules, and a Joule is kilograms times meters squared per second squared, I need to convert my mass in grams to kilograms. We do that this way:

86,184 g × 1 kg/1,000 g = 86.18 kg

So 1 Josh = 86.18 kg.
Now that I’m in the right unit of measure for mass, we can plug the values into the equation and see just what we get:
E = mc²
E = (86.18 kg)(3.00 × 10⁸ m/s)²
E = 7.76 × 10¹⁸ J

That looks like this: 7,760,000,000,000,000,000, or roughly 7.8 quintillion Joules of energy.

Artistic rendition of energy released in an explosion. Via Pixabay.

This is an incredibly large amount of energy. However, it still seems very vague. What does that number mean? How much energy is that really? Well, let’s continue this experiment and find something that we can measure this against, to help put this amount of energy into perspective for us.

First, let’s convert our energy into an equivalent measurement, something we can relate to. How does TNT sound? We must first identify a common unit of measurement for TNT: the kiloton. Then we find out just how many kilotons of TNT are in 1 Joule. After doing a little searching, I found a conversion ratio that lets us do just that:

1 Joule = 2.39 × 10⁻¹³ kilotons of TNT, meaning that 1 Joule of energy is equal to 0.000000000000239 kilotons of TNT. That is a very small number. A better way to understand this relationship is to flip that ratio around and see how many Joules of energy are in 1 kiloton of TNT: 1 kiloton of TNT = 4.18 × 10¹² Joules, or 4,184,000,000,000 Joules.

Now that we have our conversion ratio, let’s do the math.

1 Josh (E) = 7.76 × 10¹⁸ J
7.76 × 10¹⁸ J × 1 kt TNT / 4.18 × 10¹² J = 1,856,459 kilotons of TNT
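
The whole chain of unit conversions is compact enough to script. Here is a minimal sketch in Python reproducing the numbers above; change the weight in pounds to compute your own TNT equivalence:

```python
# Reproduce the article's E = mc^2 arithmetic for any body weight.
LB_TO_KG = 0.4536              # kilograms per pound
C = 3.00e8                     # speed of light [m/s]
JOULES_PER_KILOTON = 4.18e12   # TNT conversion used in the article

def tnt_equivalence(weight_lbs):
    """Return (energy in joules, kilotons of TNT) for a given weight."""
    mass_kg = weight_lbs * LB_TO_KG
    energy_j = mass_kg * C**2          # E = mc^2
    return energy_j, energy_j / JOULES_PER_KILOTON

energy, kilotons = tnt_equivalence(190)    # the author's 190 lbs
print(f"E = {energy:.2e} J ~ {kilotons:,.0f} kilotons of TNT")
# -> about 7.76e18 J, i.e. roughly 1.86 million kilotons of TNT,
#    matching the hand calculation above.
```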

Thus, concluding our little mind experiment, we find that just one human being is roughly the equivalent of 1.86 MILLION kilotons of TNT worth of energy. Let’s now put that into perspective, just to illuminate how massive this equivalence really is.

The bomb that destroyed Nagasaki in Japan during World War II was devastating. It leveled a city in seconds and helped bring the war in the Pacific to a close. That bomb had a yield of approximately 21 kilotons. So that means that I, one human being, have 88,403 times more explosive energy in me than the bomb that destroyed an entire city… and that goes for every human being.

So when you hear someone tell you that you’ve got real potential, just reply that they have no idea….

Hydrogen Bomb Blast. Image via Pixabay.

There Are No Such Things As Black Holes

UNC-Chapel Hill physics professor Laura Mersini-Houghton has proven mathematically that black holes don't exist. (Source: unc.edu)

That’s the conclusion reached by one researcher from the University of North Carolina: black holes can’t exist in our Universe — not mathematically, anyway.

“I’m still not over the shock,” said Laura Mersini-Houghton, associate physics professor at UNC-Chapel Hill. “We’ve been studying this problem for more than 50 years and this solution gives us a lot to think about.”

In a news article spotlighted by UNC, the scenario suggested by Mersini-Houghton is briefly explained. Basically, when a massive star reaches the end of its life and collapses under its own gravity after blasting its outer layers into space — a process commonly thought to result in an ultra-dense point called a singularity surrounded by a light- and energy-trapping event horizon — it undergoes a period of intense outgoing radiation (of the sort famously deduced by Stephen Hawking). This release of radiation is enough, Mersini-Houghton has calculated, to cause the collapsing star to lose too much mass for a singularity to form. No singularity means no event horizon… and no black hole.

Artist’s conception of the event horizon of a black hole. Credit: Victor de Schwanberg/Science Photo Library

At least, not by her numbers.

Read more: How Do Black Holes Form?

So what does happen to massive stars when they die? Rather than falling ever inwards to create an infinitely dense point hidden behind a space-time “firewall” — something that, while fascinating to ponder and a staple of science fiction, has admittedly been notoriously tricky for scientists to reconcile with known physics — Mersini-Houghton suggests that they just “probably blow up.” (Source)

According to the UNC article Mersini-Houghton’s research “not only forces scientists to reimagine the fabric of space-time, but also rethink the origins of the universe.”

Hm.

The submitted papers on this research are publicly available on arXiv.org and can be found here and here.

Read more: What Would It Be Like To Fall Into a Black Hole?

Don’t believe it? I’m not surprised. I’m certainly no physicist, but I do expect that there will be many scientists (and layfolk) who’ll have their own take on Mersini-Houghton’s findings (*ahem* Brian Koberlein*), especially considering 1. the popularity of black holes in astronomical culture, and 2. the many — scratch that: the countless — observations that have been made of quite black-hole-ish objects found throughout the Universe.

So what do you think? Have black holes just been voted off the cosmic island? Or are the holes more likely in the research? Share your thoughts in the comments!

Want to hear more from Mersini-Houghton herself? Here’s a link to a video explaining her view of why event horizons and singularities might simply be a myth.

Source: UNC-Chapel Hill. HT to Marco Iozzi on the Google+ Space Community (join us!)

Of course this leads me to ask: if there really are “no black holes” then what’s causing the stars in the center of our galaxy to move like this?

*Added Sept. 25: I knew Brian wouldn’t disappoint! Read his post on why “Yes, Virginia, There Are Black Holes.”

How Watching 13 Billion Years Of Cosmic Growth Links To Storytelling

Screenshot of a simulation of how the universe's dark matter and gas grew in its first 13 billion years. Credit: Harvard-Smithsonian Center for Astrophysics / YouTube

How do you show off 13 billion years of cosmic growth? One way that astronomers can figure that out is through visualizations — such as this one from the Harvard-Smithsonian Center for Astrophysics, called Illustris.

Billed as the most detailed computer simulation of the universe ever made (run on a fast supercomputer), it lets you slowly see how galaxies come alight as the structure of the universe grows. While the pictures are pretty to look at, the Kavli Foundation also argues this is good for science.

In a recent roundtable discussion, the foundation polled experts to talk about the simulation (and in particular how the gas evolves), and how watching these interactions play out before their eyes helps them come to new understandings. But like any dataset, part of the understanding comes from knowing what to focus on and why.

“I think we should look at visualization like mapmakers look at map making. A good mapmaker will be deliberate in what gets included in the map, but also in what gets left out,” said Stuart Levy, a research programmer at the National Center for Supercomputing Applications’ advanced visualization lab, in a statement.

“Visualizers think about their audience … and the specific story they want to tell. And so even with the same audience in mind, you might set up the visualization very differently to tell different stories. For example, for one story you might want to show only what it’s possible for the human eye to see, and in others you might want to show the presence of something that wouldn’t be visible in any sort of radiation at all. That can help to get a point across.”

You can read the whole discussion at this webpage.

Parallel Universes and the Many-Worlds Theory

Credit: Glenn Loos-Austin

Are you unique? In your perception of the world, the answer is simple: you are different from every other person on this planet. But is our universe unique? The concept of multiple realities, or parallel universes, complicates this answer and challenges what we know about the world and ourselves. One model of potential multiple universes, called the Many-Worlds Theory, may sound so bizarre and unrealistic that it belongs in science fiction movies rather than real life. Yet no experiment can irrefutably discredit its validity.

The origin of the parallel universe conjecture is closely connected with the introduction of quantum mechanics in the early 1900s. Quantum mechanics, the branch of physics that studies the infinitesimal world, predicts the behavior of nanoscopic objects. Physicists had difficulty fitting a mathematical model to this behavior because some matter exhibited signs of both particle-like and wave-like motion. For example, the photon, a tiny bundle of light, can oscillate vertically up and down, like a wave, while moving horizontally forward or backward, like a particle.

Such behavior starkly contrasts with that of objects visible to the naked eye, each of which we see moving like either a wave or a particle. This wave-particle duality is closely tied to the Heisenberg Uncertainty Principle (HUP), which states that the act of observation disturbs quantities like momentum and position.

In relation to quantum mechanics, this observer effect can impact the form, particle or wave, of quantum objects during measurements. Later quantum interpretations, like Niels Bohr’s Copenhagen interpretation, use the HUP to state that an observed object does not retain its dual nature and can only behave in one state.

Artist concept of the multiverse. Credit: Florida State University

In 1954, a young student at Princeton University named Hugh Everett proposed a radical supposition that differed from the popular models of quantum mechanics. Everett did not believe that observation causes quantum matter to stop behaving in multiple forms.

Instead, he argued that observation of quantum matter creates a split in the universe. In other words, the universe makes copies of itself to account for all the possibilities and these duplicates will proceed independently. Every time a photon is measured, for instance, a scientist in one universe will analyze it in wave form and the same scientist in another universe will analyze it in particle form. Each of these universes offers a unique and independent reality that coexists with other parallel universes.

If Everett’s Many-Worlds Theory (MWT) is true, it holds many ramifications that completely transform our perceptions of life. Any action that has more than one possible result produces a split in the universe. Thus, there is an infinite number of parallel universes and infinite copies of each person.

These copies have identical facial and body features, but do not have identical personalities (one may be aggressive and another may be passive) because each one experiences a separate outcome. The infinite number of alternate realities also suggests that nobody can achieve unique accomplishments. Every person – or some version of that person in a parallel universe – has done or will do everything.

Moreover, the MWT implies that everybody is immortal. Old age will no longer be a surefire killer, as some alternate realities could be so scientifically and technologically advanced that they have developed anti-aging medicine. If you die in one world, another version of you in another world will survive.

The most troubling implication of parallel universes is that your perception of the world is never real. Our “reality” at an exact moment in one parallel universe will be completely unlike that of another world; it is only a tiny figment of an infinite and absolute truth. You might believe you are reading this article at this instance, but there are many copies of you that are not reading. In fact, you are even the author of this article in some distant reality. Thus, do winning prizes and making decisions matter if we might lose those awards and make different choices? Is living important if we might actually be dead somewhere else?

Some scientists, like the Austrian-born researcher Hans Moravec, have tried to test the possibility of parallel universes. In 1987, Moravec developed a famous thought experiment called quantum suicide, which connects a person to a fatal weapon and a machine that determines the spin value, or angular momentum, of protons. Every 10 seconds, the spin value of a new proton is recorded.

Based on this measurement, the machine will either kill or spare the person, with a 50 percent chance of each outcome. If the Many-Worlds Theory is not true, then the experimenter’s survival probability decreases with every proton measurement until it essentially becomes zero (a fraction raised to a large exponent is a very small value). On the other hand, the MWT argues that the experimenter always has a 100% chance of living in some parallel universe, and he/she has encountered quantum immortality.
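
The “fraction raised to a large exponent” point is easy to make concrete. A minimal sketch of the single-universe view, where survival probability halves with each measurement:

```python
# Survival probability after n fifty-fifty rounds, absent any many-worlds escape.
for n in (1, 10, 20, 50):
    p = 0.5 ** n
    print(f"after {n:2d} measurements: P(alive) = {p:.2e}")
# After 50 rounds the odds are ~9e-16, "essentially zero", whereas the
# many-worlds view insists some branch of the experimenter survives them all.
```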

When the proton measurement is processed, there are two possibilities: the weapon either fires or does not fire. At this moment, the MWT claims that the universe splits into two different universes to account for the two endings. The weapon will discharge in one reality but not in the other. For moral reasons, scientists cannot use Moravec’s experiment to disprove or corroborate the existence of parallel worlds, as the test subjects may only be dead in one particular reality and still alive in another parallel universe. In any case, the peculiar Many-Worlds Theory and its startling implications challenge everything we know about the world.

Sources: Scientific American

Has the Cosmology Standard Model become a Rube Goldberg Device?

Artist’s illustration of the expansion of the Universe (Credit: NASA, Goddard Space Flight Center)

This week at the Royal Astronomical Society’s National Astronomy Meeting in the UK, physicists are challenging the evidence for the BICEP2 results regarding the inflationary period of the Universe, announced just 90 days ago. New research is casting doubt on the inclusion of inflation theory in the Standard Cosmological Model for understanding the forces of nature, the nature of elementary particles and the present state of the known Universe.

Back on March 17, 2014, it seemed the world was offered a glimpse of an ultimate order from eons ago … actually, from the beginning of time. BICEP2, the single-purpose instrument at the South Pole, delivered an image that, after analysis and subtraction of the estimated background signal from the Milky Way, led its researchers to conclude that they had found the earliest remnant from the birth of the Universe: a signature in ancient light that supported the theory of inflation.

BICEP2 Telescope at twilight at the South Pole, Antarctica (Credit: Steffen Richter, Harvard University)

Thirty years ago, inflation theory was conceived by physicists Alan Guth and Andrei Linde. Guth, Linde and others realized that a sudden expansion of the Universe only 10⁻³³ of a second after the Big Bang could solve some puzzling mysteries of the Cosmos. Inflation could explain the uniformity of the cosmic background radiation. While images such as those from the COBE satellite show a blotchy distribution of radiation, in actuality these images accentuate extremely small variations in the background radiation, remnants from the Big Bang, variations on the order of 1/100,000th of the background level.

Note that in a span of time as short as the Universe’s proposed inflationary period, light today would travel only about 10⁻¹⁵ of the diameter of a hydrogen atom. The Universe during this first moment of expansion was encapsulated in a volume far smaller than a single atom.
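
That comparison checks out with simple arithmetic; a quick sketch:

```python
c = 3.0e8                 # speed of light [m/s]
t_inflation = 1e-33       # duration quoted above [s]
d_hydrogen = 1e-10        # approximate diameter of a hydrogen atom [m]

distance = c * t_inflation                 # ~3e-25 m
print(f"light travels {distance:.0e} m, "
      f"or {distance / d_hydrogen:.0e} of a hydrogen atom's diameter")
# -> ~3e-15 of the atom's diameter, consistent with the 1/10^15 figure.
```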

Emotions ran very high when the BICEP2 team announced their findings on March 17 of this year. The inflation event that the background radiation data supported is described as a supercooling of the Cosmos; however, some physicists simply remained cool and stood contrarian to the theory. Noted British physicist Sir Roger Penrose was one who remained underwhelmed, stating that the subtle swirls of polarized light that remained in the processed BICEP2 data could be explained by the interaction of dust, light and magnetic fields in our own neighborhood, the Milky Way.

Illustration of the ESA Planck Telescope in Earth orbit (Credit: ESA)

Now, new observations from another detector, one on the Planck satellite orbiting the Earth, reveal that the contribution of background radiation from local sources, the dust in the Milky Way, appears to have been underestimated by the BICEP2 team. All the evidence is not yet laid out, but the researchers are now showing reservations. At the same time, this does not dismiss inflation theory. It means that more observations are needed, probably with greater sensitivity.

So why ask the question, are physicists constructing a Rube Goldberg device?

Our present understanding of the Universe stands upon what is called the “Standard Model” of cosmology. At the Royal Astronomical Society meeting this week, the discussions underway could be revealing a Standard Model in a state of collapse, or one simply needing new gadgets and mechanisms to remain the best theory of everything.

Also this week, new data further supports the discovery of the Higgs boson by the Large Hadron Collider in 2012, the elementary particle whose existence explains the mass of fundamental particles in nature and which supports the existence of the Higgs field, vital to the robustness of the Standard Model. However, the Higgs-related data also reveals that if the inflationary period of the Universe did take place, then, taken together with the Standard Model, one can conclude that the Universe should have collapsed upon itself and our very existence today would not be possible.

A Rube Goldberg toothpaste dispenser, as perhaps also the state of the Standard Model (Credit: R. Goldberg)

Dr. Brian Greene, a researcher in the field of superstring theory and M-theory, and others such as Dr. Stephen Hawking are quick to state that the Standard Model is an intermediary step towards a grand unified theory of everything. The contortion of the Standard Model into a sort of Rube Goldberg device can be explained by the unrelenting accumulation of ever more acute and diverse observations at cosmic and quantum scales.

Discussions at the Royal Astronomical Society meeting are casting more doubt upon the inflation theory that just 90 days ago appeared so well supported by BICEP2 – data derived by truly remarkable cutting-edge electronics developed by NASA and researchers at the California Institute of Technology. The trials and tribulations of these grand theories of everything harken back to the period just prior to Einstein’s miracle year, 1905. Fragmented theories separately explaining the forces of nature were in place, but the accumulation of observational data had reached a flash point.

Today, observations from BICEP2, NASA’s and ESA’s great space observatories, sensitive instruments buried miles underground, and carefully contrived quantum experiments in laboratories are stressing the Standard Model’s ability to explain everything, the same model so well supported by the Higgs boson discovery just two years ago. Cosmologists concede that we may never have a complete, proven, elegant theory of everything; however, the challenges to the Standard Model and inflation will surely embolden younger theorists to redouble their efforts in other theoretical work.

For further reading:
RAS NAM press release: Should the Higgs Boson Have Caused our Universe To Collapse?
We’ve Discovered Inflation!: Now What?
Cosmologists Cast Doubt on Inflation Evidence
Are the BICEP2 Results Invalid? Probably Not

First Precise Measurement of Antihydrogen

Hydrogen’s electron and proton have oppositely charged antimatter counterparts in antihydrogen: the positron and antiproton. Image credit: NSF.

The best science — the questions that capture and compel any human being — is enshrouded in mystery. Here’s an example: scientists expect that matter and antimatter were created in equal quantities shortly after the Big Bang. If this had been the case, the two types of particles would have annihilated each other, leaving a Universe permeated by energy.

As our existence attests, that did not happen. In fact, nature seems to have a one-part in 10 billion preference for matter over antimatter. It’s one of the greatest mysteries in modern physics.

But physicists at CERN are working hard, literally pushing matter to the limit, to solve this captivating mystery. This week, CERN created a beam of antihydrogen atoms, allowing scientists to take precise measurements of this elusive antimatter for the first time.

Antiparticles are identical to matter particles except for the sign of their electric charge. So while hydrogen consists of a positively charged proton orbited by a negatively charged electron, antihydrogen consists of a negatively charged antiproton orbited by a positively charged anti-electron, or positron.

While primordial antimatter has never been observed in the Universe, it’s possible to create antihydrogen in a particle accelerator by mixing positrons and low energy antiprotons.

In 2010, the ALPHA team captured and held atoms of antihydrogen for the first time. Now the team has successfully created a beam of antihydrogen particles. In a paper published this week in Nature Communications, the ALPHA team reports the detection of 80 antihydrogen atoms 2.7 meters downstream from their production.

“This is the first time we have been able to study antihydrogen with some precision,” said ALPHA spokesperson Jeffrey Hangst in a press release. “We are optimistic that ALPHA’s trapping technique will yield many such insights in the future.”

One of the key challenges is keeping antihydrogen away from ordinary matter, so that the two don’t annihilate each other. To do so, most experiments use magnetic fields to trap antihydrogen atoms long enough to study them.

However, the strong magnetic fields degrade the spectroscopic properties of the antihydrogen atoms, so the ALPHA team had to develop an innovative set-up to transfer antihydrogen atoms to a region where they could be studied, far from the strong magnetic field.

To measure the charge of antihydrogen, the ALPHA team studied the trajectories of antihydrogen atoms released from the trap in the presence of an electric field. If the antihydrogen atoms had an electric charge, the field would deflect them, whereas neutral atoms would be undeflected.

The result, based on 386 recorded events, gives a value for the antihydrogen electric charge of -1.3 × 10⁻⁸ in units of the elementary charge. In other words, its charge is compatible with zero to eight decimal places. Although this result comes as no surprise, since hydrogen atoms are electrically neutral, it is the first time that the charge of an antiatom has been measured to such high precision.

In the future, any detectable difference between matter and antimatter could help solve one of the greatest mysteries in modern physics, opening up a window into a new realm of science.

The paper has been published in Nature Communications.