Team Creates Negative Effective Mass In The Lab

Researchers at WSU have created a fluid with a negative effective mass for the first time, which could open the door to studying the deeper mysteries of the Universe. Credit: ESA/Hubble, ESO, M. Kornmesser

When it comes to objects and force, Isaac Newton’s Three Laws of Motion are pretty straightforward. Apply force to an object in a specific direction, and the object will move in that direction. And unless there’s something acting against it (like gravity or air resistance), it will keep moving in that direction until something stops it. But when it comes to “negative mass”, the exact opposite is true.

As the name would suggest, the term refers to matter whose mass is opposite that of normal matter. Until a few years ago, negative mass was predominantly a theoretical concept and had only been observed in very specific settings. But according to a recent study by an international team of researchers, a fluid with a “negative effective mass” has now been created under laboratory conditions for the first time.

To put it in the simplest terms, matter can have a negative mass in the same way that a particle can have a negative charge. When it comes to the Universe that we know and study on a regular basis, one could say that we have encountered only the positive form of mass. In fact, it is much the same situation as with matter and antimatter: theoretical physics tells us both exist, but we only encounter one of them on a regular basis.


As Dr. Michael McNeil Forbes – a Professor at Washington State University, a Fellow at the Institute for Nuclear Theory, and a co-author on the study – explained in a WSU press release:

“That’s what most things that we’re used to do. With negative mass, if you push something, it accelerates toward you. Once you push, it accelerates backwards. It looks like the rubidium hits an invisible wall.”

According to the team’s study, which was recently published in Physical Review Letters (under the title “Negative-Mass Hydrodynamics in a Spin-Orbit–Coupled Bose-Einstein Condensate”), a negative effective mass can be created by altering the spin-orbit coupling of atoms. Led by Peter Engels – a professor of physics and astronomy at Washington State University – this consisted of using lasers to control the behavior of rubidium atoms.

They began by using a single laser to trap rubidium atoms in a bowl measuring less than 100 microns across. This had the effect of slowing the atoms down and cooling them to just above absolute zero, which resulted in the rubidium becoming a Bose-Einstein condensate. Named after Satyendra Nath Bose and Albert Einstein (who predicted how such atoms would behave), this type of condensate behaves like a superfluid.

Velocity-distribution data (3 views) for a gas of rubidium atoms, confirming the discovery of a new phase of matter, the Bose–Einstein condensate. Credit: NIST/JILA/CU-Boulder

Basically, this means that the particles move very slowly and behave like waves, without losing any energy. A second set of lasers was then applied to move the atoms back and forth, effectively changing the way they spin. Prior to this change in spin, the superfluid had regular mass, and breaking the bowl would simply cause the rubidium to push outward and expand away from its center of mass.

But after the application of the second set of lasers, the rubidium rushed out and accelerated in the opposite direction – consistent with how a negative mass would behave. This represented a break with previous laboratory experiments, where researchers were unable to get atoms to behave in a way that was consistent with negative mass. But as Forbes explained, the WSU experiment avoided some of the underlying defects encountered in those experiments:

“What’s a first here is the exquisite control we have over the nature of this negative mass, without any other complications. It provides another environment to study a fundamental phenomenon that is very peculiar.”

And while news of this experiment has been met with fanfare and claims to the effect that the researchers had “rewritten the laws of physics”, it is important to emphasize that this research has created a “negative effective mass” – which is fundamentally different from a negative mass.

Artist’s rendering of an outburst on an ultra-magnetic neutron star, also called a magnetar.
Credit: NASA/Goddard Space Flight Center

As Sabine Hossenfelder, a Research Fellow at the Frankfurt Institute for Advanced Studies, wrote on her website Backreaction in response to the news:

“Physicists use the preamble ‘effective’ to indicate something that is not fundamental but emergent, and the exact definition of such a term is often a matter of convention. The ‘effective radius’ of a galaxy, for example, is not its radius. The ‘effective nuclear charge’ is not the charge of the nucleus. And the ‘effective negative mass’ – you guessed it – is not a negative mass. The effective mass is merely a handy mathematical quantity to describe the condensate’s behavior.”

In other words, the researchers were able to get atoms to behave as a negative mass, rather than creating one. Nevertheless, their experiment demonstrates the level of control researchers now have when conducting quantum experiments, and also serves to clarify how negative mass behaves in other systems. Basically, physicists can use the results of these kinds of experiments to probe the mysteries of the Universe where experimentation is impossible.

These include what goes on inside neutron stars, or what transpires beneath the veil of an event horizon. Perhaps they could even shed some light on questions relating to dark energy.
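For readers who want to see where the “effective” mass comes from, it is set by the curvature of the condensate’s energy band: 1/m* = d²E/dk² (in units where ħ = 1). The short sketch below uses a toy lower-band dispersion loosely modeled on a Raman (spin-orbit) coupled gas – the functional form and the coupling strength are illustrative assumptions, not values from the paper – to show where that curvature, and hence the effective mass, turns negative.

```python
import numpy as np

# Toy lower-band dispersion for a Raman (spin-orbit) coupled gas, in
# recoil units (hbar = m = k_R = 1). The functional form and the
# coupling strength OMEGA are illustrative assumptions only.
OMEGA = 1.0

def lower_band(k):
    return 0.5 * k**2 - np.sqrt(k**2 + (OMEGA / 2.0)**2)

k = np.linspace(-2.0, 2.0, 4001)
E = lower_band(k)

# The effective mass follows from the band curvature: 1/m* = d^2E/dk^2.
curvature = np.gradient(np.gradient(E, k), k)

neg = k[curvature < 0]
print(f"1/m* < 0 (negative effective mass) for |k| < ~{neg.max():.2f}")
# In that window, a push produces acceleration in the opposite
# direction -- the "hits an invisible wall" behavior described above.
```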

Further Reading: Physical Review Letters, WSU

CERN Declares War On The Standard Model

The LHCb collaboration was launched in 2016 to explore the events that followed the Big Bang. Credit: CERN

Ever since the discovery of the Higgs Boson in 2012, the Large Hadron Collider has been dedicated to searching for physics that goes beyond the Standard Model. A key instrument in this search is the Large Hadron Collider beauty experiment (LHCb), which was established in 1995 specifically for the purpose of exploring what happened after the Big Bang that allowed matter to survive and create the Universe as we know it.

Since that time, the LHCb has been doing some rather amazing things. This includes discovering five new particles, uncovering evidence of a new manifestation of matter-antimatter asymmetry, and (most recently) obtaining unusual results when monitoring the decay of B mesons. These findings, which CERN announced in a recent press release, could be an indication of new physics that is not part of the Standard Model.

In this latest study, the LHCb collaboration team noted how the decay of B0 mesons resulted in the production of an excited kaon and a pair of electrons or muons. Muons, for the record, are subatomic particles that are 200 times more massive than electrons, but whose interactions are believed to be the same as those of electrons (as far as the Standard Model is concerned).

The LHCb collaboration team. Credit: lhcb-public.web.cern.ch

This is what is known as “lepton universality”, which predicts not only that electrons and muons behave the same way, but also that they should be produced with the same probability – with some small differences arising from their difference in mass. However, in testing the decay of B0 mesons, the team found that the process produced muons less frequently. These results were collected during Run 1 of the LHC, which ran from 2009 to 2013.

The results of these decay tests were presented on Tuesday, April 18th, at a CERN seminar, where members of the LHCb collaboration team shared their latest findings. As they indicated during the course of the seminar, these findings are significant in that they appear to confirm results obtained by the LHCb team during previous decay studies.

This is certainly exciting news, as it hints at the possibility that new physics is being observed. With the confirmation of the Standard Model (made possible by the discovery of the Higgs boson in 2012), investigating theories that go beyond it (e.g. Supersymmetry) has been a major goal of the LHC. And with the collider’s upgrades completed in 2015, this has been one of the chief aims of Run 2 (which will last until 2018).

A typical LHCb event fully reconstructed. Particles identified as pions, kaon, etc. are shown in different colours. Credit: LHCb collaboration

Naturally, the LHCb team indicated that further studies will be needed before any conclusions can be drawn. For one, the discrepancy they noted between the production of muons and electrons has a statistical significance of just 2.2 to 2.5 sigma. To put that in perspective, the first detection of the Higgs Boson was made at a level of 5 sigma.

In addition, these results are inconsistent with previous measurements which indicated that there is indeed symmetry between electrons and muons. As a result, more decay tests will have to be conducted and more data collected before the LHCb collaboration team can say definitively whether this was a sign of new particles, or merely a statistical fluctuation in their data.
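For a rough sense of what those sigma figures imply, here is a minimal sketch (assuming the usual one-sided Gaussian convention, which is only an approximation to how the collaboration quotes significance from its likelihood fits) converting a significance in sigma into the probability of the result arising from a chance fluctuation:

```python
from scipy.stats import norm

# One-sided Gaussian tail probability for a given significance.
for sigma in (2.2, 2.5, 5.0):
    p = norm.sf(sigma)  # survival function = 1 - CDF
    print(f"{sigma} sigma  ->  p ~ {p:.2e}")

# ~2.5 sigma corresponds to p ~ 6e-3, far weaker than the ~3e-7
# (5 sigma) threshold conventionally required to claim a discovery.
```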

The results of this study will soon be released in an LHCb research paper. And for more information, check out the PDF version of the seminar.

Further Reading: CERN, LHCb

Watch Stars Orbit The Milky Way’s Supermassive Black Hole

Stars circle 'round the Milky Way central supermassive black hole. Credit: ESO
The Milky Way’s supermassive black hole, called Sagittarius A* (or Sgr A*), is arrowed in the image made of the innermost galactic center in X-ray light by NASA’s Chandra Observatory. To the left or east of Sgr A* is Sgr A East, a large cloud that may be the remnant of a supernova. Centered on Sgr A* is a spiral shaped group of gas streamers that might be falling onto the hole. Credit: NASA/CXC/MIT/Frederick K. Baganoff et al.

When your ordinary citizen learns there’s a supermassive black hole with a mass of 4 million suns sucking on its teeth in the center of the Milky Way galaxy, they might kindly ask exactly how astronomers know this. A perfectly legitimate question. You can tell them that the laws of physics guarantee their existence or that people have been thinking about black holes since 1783. That year, English clergyman John Michell proposed the idea of “dark stars” so massive and gravitationally powerful they could imprison their own light.

This time-lapse movie in infrared light shows how stars in the central light-year of the Milky Way have moved over a period of 14 years. The yellow mark at the image center represents the location of Sgr A*, site of an unseen supermassive black hole.
Credit: A. Eckart (U. Koeln) & R. Genzel (MPE-Garching), SHARP I, NTT, La Silla Obs., ESO

Michell wasn’t making wild assumptions but taking the idea of gravity to a logical conclusion. Of course, he had no way to prove his assertion. But we do. Astronomers now routinely find both stellar-mass black holes — remnants of the collapse of gas-guzzling supergiant stars — and the supermassive variety in the cores of galaxies, which result from multiple black hole mergers over grand intervals of time.

Some of the galactic variety contain hundreds of thousands to billions of solar masses, all of it so to speak “flushed down the toilet” and unavailable to fashion new planets and stars. Famed physicist Stephen Hawking has shown that black holes evaporate over time, returning their energy to the knowable universe from whence they came, though no evidence of the process has yet been found.

On September 14, 2013, astronomers caught the largest X-ray flare ever detected from Sgr A*, the supermassive black hole at the center of the Milky Way, using NASA’s Chandra X-ray Observatory. This event was 400 times brighter than the usual X-ray output from the source and was possibly caused when Sgr A*’s strong gravity tore apart an asteroid in its neighborhood, heating the debris to X-ray-emitting temperatures before slurping down the remains. The inset shows the giant flare. Credit: NASA

So how do we really know a massive, dark object broods at the center of our sparkling Milky Way? Astronomers use radio, X-ray and infrared telescopes to peer into its starry heart and see gas clouds and stars whirling about the center at high rates of speed. Based on those speeds they can calculate the mass of what’s doing the pulling.
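As a rough illustration of that calculation, Kepler’s third law applied to a single well-measured star gives the enclosed mass directly. The sketch below uses approximate, S2-like orbital parameters, assumed here purely for the sake of the example:

```python
# Enclosed mass from a stellar orbit via Kepler's third law:
# M (in solar masses) = a^3 / P^2, with a in AU and P in years.
# Approximate, S2-like values assumed for illustration.
a_au = 970.0   # semi-major axis of the stellar orbit (AU)
p_yr = 16.0    # orbital period (years)

mass_suns = a_au**3 / p_yr**2
print(f"Enclosed mass ~ {mass_suns:.2e} solar masses")
# ~3.6 million solar masses -- roughly the 4 million suns quoted above.
```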

The Hubble Space Telescope took this photo of the 5,000-light-year-long jet of radiation ejected from the active galaxy M87’s supermassive black hole, which is about 1,000 times more massive than the Milky Way’s black hole. Although black holes are dark, matter whirling into their maws at high speed is heated to high temperatures, creating a bright disk of material and jets of radiation. Credit: NASA/The Hubble Heritage Team (STScI/AURA)

In the case of the galaxy M87, located 53.5 million light-years away in the Virgo Cluster, those speeds tell us that something with a mass of 3.6 billion suns is concentrated in a space smaller than our Solar System. Oh, and it emits no light! Nothing fits the evidence better than a black hole, because nothing that massive can exist in so small a space without collapsing in upon itself to form a black hole. It’s just physics, something Mr. Scott on Star Trek regularly reminded a panicky Captain Kirk about.

So it is with the Milky Way, only our black hole amounts to a piddling 4-million-solar-mass light thief confined within a spherical volume of space some 27 million miles in diameter, or just shy of Mercury’s perihelion distance from the Sun. This monster hole resides at the location of Sagittarius A* (pronounced “A-star”), a bright, compact radio source at the galactic center about 26,000 light-years away.


Video showing a 14-year-long time lapse of stars orbiting Sgr A*

The time-lapse movie, compiled over 14 years, shows the orbits of several dozen stars within the central light-year of space centered on Sgr A*. We can clearly see the stars moving under the influence of a massive unseen body — the putative supermassive black hole. No observations of Sgr A* in visible light are possible because of the multiple veils of interstellar dust that lie across our line of sight. They quench its light to the tune of 25 magnitudes (a factor of ten billion in brightness).


Merging black holes (the process looks oddly biological!). Credit: SXS

How do these things grow so big in the first place? There are a couple of ideas, but astronomers don’t honestly know for sure. Massive gas clouds present early in the galaxy’s history could have collapsed to form multiple supergiants that evolved into black holes, which later coalesced into one big hole. Or collisions among stars in massive, compact star clusters could have built up stellar giants that evolved into black holes. Later, the clusters sank to the center of the galaxy and merged into a single supermassive black hole.

Whichever you choose, the merging of smaller holes may explain its origin.

On a clear spring morning before dawn, you can step out to face the constellation Sagittarius low in the southern sky. When you do, you’re also facing in the direction of our galaxy’s supermassive black hole. Although you cannot see it, does it not still exert a certain tug on your imagination?

Large Hadron Collider Discovers 5 New Gluelike Particles

A typical LHCb event fully reconstructed. Particles identified as pions, kaon, etc. are shown in different colours. Credit: LHCb collaboration

Since it began its second operational run in 2015, the Large Hadron Collider has been doing some pretty interesting things. For example, starting in 2016, researchers at CERN began using the collider to conduct the Large Hadron Collider beauty experiment (LHCb). This investigation seeks to determine what took place after the Big Bang so that matter was able to survive and create the Universe that we know today.

In the past few months, the experiment has yielded some impressive results, such as the measurement of a very rare form of particle decay and evidence of a new manifestation of matter-antimatter asymmetry. And most recently, the researchers behind LHCb have announced the discovery of a new system of five particles, all of which were observed in a single analysis.

According to the research paper, which appeared in arXiv on March 14th, 2017, the particles that were detected were excited states of what is known as a “Omega-c-zero” baryon. Like other particles of its kind, the Omega-c-zero is made up of three quarks – two of which are “strange” while the third is a “charm” quark. The existence of this baryon was confirmed in 1994. Since then, researchers at CERN have sought to determine if there were heavier versions.

The LHCb collaboration team. Credit: lhcb-public.web.cern.ch

And now, thanks to the LHCb experiment, it appears that they have found them. The key was to examine the trajectories and the energy left in the detector by particles in their final configuration and trace them back to their original state. Basically, Omega-c-zero particles decay via the strong force into another type of baryon (Xi-c-plus) and then via the weak force into protons, kaons, and pions.

From this, the researchers were able to determine that what they were seeing were Omega-c-zero particles at different energy states (i.e. of different sizes and masses). Expressed in megaelectronvolts (MeV), these particles have masses of 3000, 3050, 3066, 3090 and 3119 MeV, respectively. This discovery was rather unique, since it involved the detection of five higher energy states of a particle at the same time.
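The “tracing back” relies on the invariant mass: summing the energies and momenta of the decay products and forming M = √((ΣE)² − |Σp|²) (in natural units, c = 1) gives a quantity that peaks at the mass of the parent particle, whatever frame it was produced in. Below is a minimal sketch of that bookkeeping, with made-up four-momenta used purely for illustration:

```python
import math

def invariant_mass(particles):
    """Invariant mass (MeV) from (E, px, py, pz) tuples in MeV, c = 1."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# Made-up four-momenta of two hypothetical decay products (MeV):
products = [
    (2500.0, 1200.0, 300.0, 2100.0),
    ( 800.0, -400.0, 100.0,  650.0),
]
print(f"Reconstructed parent mass ~ {invariant_mass(products):.0f} MeV")
```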

This was made possible thanks to the specialized capabilities of the LHCb detector and the large dataset accumulated from the first and second runs of the LHC – which ran from 2009 to 2013, and since 2015, respectively. Armed with the right equipment and experience, the researchers were able to identify the particles with an overwhelming level of certainty, ruling out the possibility that they were a statistical fluke in the data.

The discovery is also expected to shed light on some of the deeper mysteries of subatomic particles, like how the three constituent quarks are bound inside a baryon by the “strong force” – i.e. the fundamental force that is responsible for holding the insides of atoms together. Another mystery that this could help resolve is the correlation between different quark states.

The Large Hadron Collider is the world’s largest and most powerful particle accelerator. Credit: CERN

As Dr Greig Cowan – a researcher from the University of Edinburgh who works on the LHCb experiment at CERN’s LHC – explained in an interview with the BBC:

“This is a striking discovery that will shed light on how quarks bind together. It may have implications not only to better understand protons and neutrons, but also more exotic multi-quark states, such as pentaquarks and tetraquarks.”

The next step will be to determine the quantum numbers of these new particles (the numbers used to identify the properties of a specific particle), as well as to determine their theoretical significance. Since it came online, the LHC has been helping to confirm the Standard Model of particle physics, as well as reaching beyond it to explore the greater unknowns of how the Universe came to be, and how the fundamental forces that govern it fit together.

In the end, the discovery of these five new particles could be a crucial step along the road towards a Theory of Everything (ToE), or just another piece in the very big puzzle that is our existence. Stay tuned to see which!

Further Reading: CERN, LHCb, arXiv

The Universe Has A Lithium Problem

This illustration shows the evolution of the Universe, from the Big Bang on the left, to modern times on the right. Image: NASA

Over the past few decades, scientists have wrestled with a problem involving the Big Bang Theory: it predicts that there should be about three times as much lithium as we can observe. Why is there such a discrepancy between prediction and observation?

To get into that problem, let’s back up a bit.

The Big Bang Theory (BBT) is well-supported by multiple lines of evidence and theory. It’s widely accepted as the explanation for how the Universe started. Three key pieces of evidence support the BBT: the observed expansion of the Universe, the Cosmic Microwave Background, and the abundances of the light elements forged shortly after the Big Bang.

But the BBT still has some niggling questions.

The missing lithium problem centers on the earliest stages of the Universe: from about 10 seconds to 20 minutes after the Big Bang. The Universe was super hot and it was expanding rapidly. This was the beginning of what’s called the Photon Epoch.

At that time, atomic nuclei formed through nucleosynthesis. But the extreme heat that dominated the Universe prevented the nuclei from combining with electrons to form atoms. The Universe was a plasma of nuclei, electrons, and photons.

Only the lightest nuclei were formed during this time, including most of the helium in the Universe, and small amounts of other light nuclides, like deuterium and our friend lithium. For the most part, heavier elements weren’t formed until stars appeared, and took on the role of nucleosynthesis.

The problem is that our understanding of the Big Bang tells us that there should be three times as much lithium as there is. The BBT gets it right when it comes to other primordial nuclei. Our observations of primordial helium and deuterium match the BBT’s predictions. So far, scientists haven’t been able to resolve this inconsistency.

But a new paper from researchers in China may have solved the puzzle.

One assumption in Big Bang nucleosynthesis is that all of the nuclei are in thermodynamic equilibrium, and that their velocities conform to what’s called the classical Maxwell-Boltzmann distribution. But the Maxwell-Boltzmann distribution describes what happens in what is called an ideal gas. Real gases can behave differently, and this is what the researchers propose: that nuclei in the plasma of the Universe’s early Photon Epoch behaved slightly differently than thought.
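To illustrate the idea, the sketch below compares the classical Maxwell-Boltzmann weight with a non-extensive (Tsallis-type) generalization. The specific functional form and the q value chosen here are generic illustrations, not the authors’ fitted parameters; the point is that even a small departure from q = 1 noticeably changes the high-energy tail that drives nuclear reactions:

```python
import numpy as np

# Dimensionless energy: x = m v^2 / (2 k T)
x = np.linspace(0.0, 10.0, 1001)

# Classical Maxwell-Boltzmann weight.
mb = np.exp(-x)

# Non-extensive (Tsallis) generalization; reduces to exp(-x) as q -> 1.
# q = 1.05 is an arbitrary illustrative choice, not the paper's value.
q = 1.05
tsallis = np.power(np.maximum(1.0 - (1.0 - q) * x, 0.0), 1.0 / (1.0 - q))

# The ratio shows how the high-energy tail is modified.
for xi in (1.0, 5.0, 10.0):
    i = np.argmin(np.abs(x - xi))
    print(f"x = {xi:4.1f}:  Tsallis/MB = {tsallis[i] / mb[i]:.3f}")
```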

This graphic shows the distribution of the light primordial elements in the early Universe by time and temperature. Temperature runs along the top, time along the bottom, and abundance along the side. Image: Hou et al. 2017

The authors applied what is known as non-extensive statistics to solve the problem. In the graph above, the dotted lines of the authors’ model predict a lower abundance of the beryllium isotope. This is key, since beryllium decays into lithium. Also key is that the resulting amounts of lithium, and of the other light nuclei, now all conform to the abundances we actually observe. It’s a eureka moment for cosmology aficionados.

The decay chains of the light primordial nuclei in the early days of the Universe. Notice the thin red arrows between beryllium and lithium at 10-13, the earliest time shown on this chart. Image: Hou et al.

What this all means is scientists can now accurately predict the abundance in the primordial universe of the three primordial nuclei: helium, deuterium, and lithium. Without any discrepancy, and without any missing lithium.

This is how science grinds away at problems, and if the authors of the paper are correct, then it further validates the Big Bang Theory, and brings us one step closer to understanding how our Universe was formed.

Eureka!

Harvard Physicist Creates Metallic Hydrogen Using Diamond Vise

Using two diamonds, scientists squeezed hydrogen to pressures above those in Earth's core. Credit: Sang-Heon Shim, Arizona State University

For some time, scientists have been fascinated by the concept of metallic hydrogen. This phase of hydrogen is believed to exist naturally where hydrogen is placed under extreme pressures (like in the interiors of gas giants like Jupiter). But as a synthetic material, it would have endless applications, since it is believed to have superconducting properties at room temperature and the ability to retain its solidity once it has been brought back to normal pressure.

For this reason, condensed matter physicists have been attempting to create metallic hydrogen for decades. And according to a recent study published in Science Magazine, a pair of physicists from the Lyman Laboratory of Physics at Harvard University claim to have done this very thing. If true, this accomplishment could usher in a new age of super materials and high-pressure physics.

The existence of metallic hydrogen was first predicted in 1935 by Princeton physicists Eugene Wigner and Hillard Bell Huntington. For years, Isaac Silvera (the Thomas D. Cabot Professor at Harvard University) and Ranga Dias, a postdoctoral fellow, have sought to create it. They claim to have succeeded, using a process which they described in their recently-published study, “Observation of the Wigner-Huntington transition to metallic hydrogen”.

This cut-away illustrates a model of the interior of Jupiter, with a rocky core overlaid by a deep layer of liquid metallic hydrogen. Credit: Kelvinsong/Wikimedia Commons

Such a feat, which is tantamount to creating the heart of Jupiter between two diamonds, is unparalleled in the history of science. As Silvera described the accomplishment in a recent Harvard press release:

“This is the Holy Grail of high-pressure physics. It’s the first-ever sample of metallic hydrogen on Earth, so when you’re looking at it, you’re looking at something that’s never existed before.”

In the past, scientists have succeeded in creating liquid hydrogen under high-temperature conditions by ramping up the pressure it was exposed to (as opposed to cryogenically cooling it). But metallic hydrogen has continued to elude experimental scientists, despite repeated (and unproven) claims in the past to have achieved synthesis. The reason for this is that such experiments are extremely temperamental.

For instance, the diamond anvil method (a variation of which Silvera and Dias used) consists of holding a sample of hydrogen in place with a thin metal gasket, then compressing it between two diamond-tipped vises. This puts the sample under extreme pressure, and a laser sensor is used to monitor for any changes. In the past, this has proved problematic, since the pressure can cause the hydrogen to fill imperfections in the diamonds and crack them.

While protective coatings can ensure the diamonds don’t crack, the additional material makes it harder to get accurate readings from laser measurements. What’s more, scientists attempting to experiment with hydrogen have found that pressures of ~400 gigapascals (GPa) or more need to be involved – which turns the hydrogen samples black, thus preventing the laser light from being able to penetrate them.

Microscopic images of the stages in the creation of metallic hydrogen: Transparent molecular hydrogen (left) at about 200 GPa, which is converted into black molecular hydrogen, and finally reflective atomic metallic hydrogen at 495 GPa. Credit: Isaac Silvera

For the sake of their experiment, Dias and Silvera took a different approach. For starters, they used two small pieces of polished synthetic diamond rather than natural ones. They then used a reactive ion etching process to shave the diamonds’ surfaces, and coated them with a thin layer of alumina to prevent hydrogen from diffusing into the crystal structure.

They also simplified the experiment by removing the need for high-intensity laser monitoring, relying on Raman spectroscopy instead. When they reached a pressure of 495 GPa (greater than that at the center of the Earth), their sample reportedly became metallic and changed from black to shiny red. This was revealed by measuring the spectrum of the sample, which showed that it had become highly reflective (which is expected for a sample of metal).
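To put that pressure in perspective, here is a quick conversion; the figure of roughly 360 GPa for the pressure at Earth’s core is a commonly quoted approximation:

```python
# Quick scale comparison for the quoted pressures.
GPA_TO_ATM = 1.0e9 / 101_325.0   # pascals per GPa divided by pascals per atm

p_sample_gpa = 495.0             # pressure reached in the experiment
p_earth_core_gpa = 360.0         # commonly quoted approximate value

print(f"495 GPa ~ {p_sample_gpa * GPA_TO_ATM:,.0f} atmospheres")
print(f"That is ~{p_sample_gpa / p_earth_core_gpa:.1f}x the pressure at Earth's center")
```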

As Silvera explained, these experimental results (if verified) could lead to all kinds of possibilities:

“One prediction that’s very important is metallic hydrogen is predicted to be meta-stable. That means if you take the pressure off, it will stay metallic, similar to the way diamonds form from graphite under intense heat and pressure, but remain diamonds when that pressure and heat are removed. As much as 15 percent of energy is lost to dissipation during transmission, so if you could make wires from this material and use them in the electrical grid, it could change that story.”

Superconducting links developed to carry currents of up to 20,000 amperes are being tested at CERN. Credit: CERN

In short, metallic hydrogen could speed the revolution in electronics already underway, thanks to the discovery of materials like graphene. Since metallic hydrogen is also believed to be a superconductor at room temperature, its synthetic production would have immense implications for high-energy research and physics – such as that being conducted by CERN.

Beyond that, it would also enable research into the interiors of gas giants. For some time, scientists have suspected that a layer of metallic hydrogen may surround the cores of gas giants like Jupiter and Saturn. Naturally, the temperature and pressure conditions in the interiors of these planets make direct study impossible. But by being able to produce metallic hydrogen synthetically, scientists could conduct experiments to see how it behaves.

Naturally, the news of this experiment and its results is being met with skepticism. For instance, critics wonder if the pressure reading of 495 GPa was in fact accurate, since Silvera and Dias only obtained that as a final measurement and were forced to rely on estimates prior to that. Second, there are those who question whether the reddish speck that resulted is in fact hydrogen, or merely some material that came from the gasket or diamond coating during the process.

However, Silvera and Dias are confident in their results and believe they can be replicated (which would go far to silence doubts about their results). For one, they emphasize that a comparative measurement of the reflective properties of the hydrogen dot and the surrounding gasket suggest that the hydrogen is pure. They also claim their pressure measurements were properly calibrated and verified.

In the future, they intend to obtain additional spectrographic readings from the sample to confirm that it is in fact metallic. Once that is done, they plan to test the sample to see if it is truly metastable, which will consist of them opening the vise and seeing if it remains in a solid state. Given the implications of success, there are many who would like to see their experiment borne out!

Be sure to check out this video produced by Harvard University that talks about the experiment:

Further Reading: Science Magazine, Harvard Gazette

What is the Alcubierre “Warp” Drive?

No immediate plausibility issues with this picture, since the speedometer says 0.8c. Getting it past 1.0c is where it gets tricky.

It’s always a welcome thing to learn that ideas that are commonplace in science fiction have a basis in science fact. Cryogenic freezers, laser guns, robots, silicate implants… and let’s not forget the warp drive! Believe it or not, this concept – alternately known as FTL (Faster-Than-Light) travel, Hyperspace, Lightspeed, etc. – actually has one foot in the world of real science.

In physics, it is what is known as the Alcubierre Warp Drive. On paper, it is a highly speculative, but possibly valid, solution of the Einstein field equations, specifically how space, time and energy interact. In this particular mathematical model of spacetime, there are features that are apparently reminiscent of the fictional “warp drive” or “hyperspace” from notable science fiction franchises, hence the association.

Background:

Since Einstein first proposed the Special Theory of Relativity in 1905, scientists have been operating under the restrictions imposed by a relativistic universe. One of these restrictions is the belief that the speed of light is unbreakable and hence, that there will never be such a thing as FTL space travel or exploration.

Visualization of a warp field, according to the Alcubierre Drive. Credit: AllenMcC

Even though subsequent generations of scientists and engineers managed to break the sound barrier and defeat the pull of the Earth’s gravity, the speed of light appeared to be one barrier that was destined to hold. But then, in 1994, a Mexican physicist by the name of Miguel Alcubierre came along with a proposed method for stretching the fabric of space-time in a way which would, in theory, allow FTL travel to take place.

Concept:

To put it simply, this method of space travel involves stretching the fabric of space-time in a wave which would (in theory) cause the space ahead of an object to contract while the space behind it would expand. An object inside this wave (i.e. a spaceship) would then be able to ride this region, known as a “warp bubble” of flat space.

This is what is known as the “Alcubierre Metric”. Interpreted in the context of General Relativity, the metric allows a warp bubble to appear in a previously flat region of spacetime and move away, effectively at speeds that exceed the speed of light. The interior of the bubble is the inertial reference frame for any object inhabiting it.

Since the ship is not moving within this bubble, but is being carried along as the region itself moves, conventional relativistic effects such as time dilation would not apply. Hence, the rules of space-time and the laws of relativity would not be violated in the conventional sense.

Artist’s concept of a spacecraft using an Alcubierre Warp Drive. Credit: NASA

One of the reasons for this is because this method would not rely on moving faster than light in the local sense, since a light beam within this bubble would still always move faster than the ship. It is only “faster than light” in the sense that the ship could reach its destination faster than a beam of light that was traveling outside the warp bubble.

Difficulties:

However, there are a few problems with this theory. For one, there are no known methods to create such a warp bubble in a region of space that would not already contain one. Second, assuming there was a way to create such a bubble, there is not yet any known way of leaving it once inside. As a result, the Alcubierre drive (or metric) remains in the category of theory at this time.

Mathematically, it can be represented by the following equation: ds² = –(α² – βᵢβⁱ) dt² + 2βᵢ dxⁱ dt + γᵢⱼ dxⁱ dxʲ, where α is the lapse function that gives the interval of proper time between nearby hypersurfaces, βⁱ is the shift vector that relates the spatial coordinate systems on different hypersurfaces, and γᵢⱼ is a positive-definite metric on each of the hypersurfaces.
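For reference, Alcubierre’s specific solution is usually quoted in the simpler form below (a sketch of the standard published expression, written with c = 1; the bubble radius R and the wall-steepness parameter σ are free choices, and r_s is the distance from the bubble’s center x_s(t)):

```latex
% Alcubierre (1994) warp metric, written with c = 1
ds^2 = -dt^2 + \left[\, dx - v_s(t)\, f(r_s)\, dt \,\right]^2 + dy^2 + dz^2,
\qquad v_s(t) = \frac{dx_s(t)}{dt}

% Shape function: ~1 inside the bubble of radius R, ~0 far outside,
% with sigma controlling how sharp the bubble wall is
f(r_s) = \frac{\tanh\!\big(\sigma\,(r_s + R)\big) - \tanh\!\big(\sigma\,(r_s - R)\big)}{2\,\tanh(\sigma R)}
```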

Attempts at Development:

In 1996, NASA founded a research project known as the Breakthrough Propulsion Physics Project (BPP) to study various spacecraft proposals and technologies. In 2002, the project’s funding was discontinued, which prompted the founder – Marc G. Millis – and several members to create the Tau Zero Foundation. Named after Poul Anderson’s famous novel of the same name, this organization is dedicated to researching interstellar travel.

In 2012, NASA’s Advanced Propulsion Physics Laboratory (aka. Eagleworks) announced that it had begun conducting experiments to see if a “warp drive” was in fact possible. This included developing an interferometer to detect the spatial distortions produced by the expanding and contracting space-time of the Alcubierre metric.

The team lead – Dr. Harold Sonny White – described their work in a NASA paper titled Warp Field Mechanics 101. He also explained their work in NASA’s 2012 Roundup publication:

“We’ve initiated an interferometer test bed in this lab, where we’re going to go through and try and generate a microscopic instance of a little warp bubble. And although this is just a microscopic instance of the phenomena, we’re perturbing space time, one part in 10 million, a very tiny amount… The math would allow you to go to Alpha Centauri in two weeks as measured by clocks here on Earth. So somebody’s clock onboard the spacecraft has the same rate of time as somebody in mission control here in Houston might have. There are no tidal forces, no undue issues, and the proper acceleration is zero. When you turn the field on, everybody doesn’t go slamming against the bulkhead, (which) would be a very short and sad trip.”

In 2013, Dr. White and members of Eagleworks published the results of their 19.6-second warp field test under vacuum conditions. These results, which were deemed to be inconclusive, were presented at the 2013 Icarus Interstellar Starship Congress held in Dallas, Texas.

When it comes to the future of space exploration, some very tough questions seem unavoidable. And questions like “how long will it take us to get to the nearest star?” seem rather troubling when we don’t make allowances for some kind of hypervelocity or faster-than-light transit method. How can we expect to become an interstellar species when all available methods will either take centuries (or longer), or will involve sending a nanocraft instead?

At present, such a thing just doesn’t seem to be entirely within the realm of possibility. And attempts to prove otherwise remain unsuccessful or inconclusive. But as history has taught us, what is considered to be impossible changes over time. Someday, who knows what we might be able to accomplish? But until then, we’ll just have to be patient and wait on future research.

We have written many articles about the Alcubierre “Warp” Drive for Universe Today. Here’s Warp Drives Probably Impossible After All, Warp Drives and Cloaking Devices not just Science Fiction Anymore, Warp Drive May Come with a Killer Downside, Astronomy Without a Telescope – Warp Drive on Paper, and Zoom, Zoom, Zoom: Gorgeous Warp Ship Design Delights The Internet.

If you’d like more info on the Alcubierre “Warp” Drive, check out an article from Wikipedia. Also, check out another article about the warp drive spaceship engine.

We’ve also recorded an entire episode of Astronomy Cast all about Light Echoes. Listen here, Episode 215: Light Echoes.


Who was Max Planck?

Portrait of Max Planck (c. 1930). Credit: Smithsonian Libraries

Imagine if you will that your name would forever be associated with a groundbreaking scientific theory. Imagine also that your name would even be attached to a series of units designed to perform measurements for complex equations. Now imagine that you were a German who lived through two World Wars, won the Nobel Prize for physics, and outlived many of your children.

If you can do all that, then you might know what it was like to be Max Planck, the German physicist and founder of quantum theory. Much like Galileo, Newton, and Einstein, Max Planck is regarded as one of the most influential and groundbreaking scientists of his time, a man whose discoveries helped to revolutionize the field of physics. Ironic, considering that when he first embarked on his career, he was told there was nothing new to be discovered!

Early Life and Education:

Born in 1858 in Kiel, Germany, Planck was a child of intellectuals: his grandfather and great-grandfather were both theology professors, his father was a professor of law, and his uncle was a judge. In 1867, his family moved to Munich, where Planck enrolled in the Maximilians Gymnasium. From an early age, Planck demonstrated an aptitude for mathematics, astronomy, mechanics, and music.

Illustration of Friedrich Wilhelms University, with the statue of Frederick the Great (ca. 1850). Credit: Wikipedia Commons/A. Carse

He graduated early, at the age of 17, and went on to study theoretical physics at the University of Munich. In 1877, he went on to Friedrich Wilhelms University in Berlin to study with physicist Hermann von Helmholtz. Helmholtz had a profound influence on Planck, and the two became close friends; eventually, Planck decided to adopt thermodynamics as his field of research.

In October 1878, he passed his qualifying exams and defended his dissertation in February of 1879 – titled “On the second law of thermodynamics”. In this work, he made the following statement, from which the modern Second Law of Thermodynamics is believed to be derived: “It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the raising of a weight and cooling of a heat reservoir.”

For a time, Planck toiled away in relative anonymity because of his work with entropy (which was considered a dead field). However, he made several important discoveries in this time that would allow him to grow his reputation and gain a following. For instance, his Treatise on Thermodynamics, which was published in 1897, contained the seeds of ideas that would go on to become highly influential – namely, black body radiation and special states of equilibrium.

With the completion of his thesis, Planck became an unpaid private lecturer at the University of Munich and joined the local Physical Society. Although the academic community did not pay much attention to him, he continued his work on heat theory and came to independently discover the same theory of thermodynamics and entropy as Josiah Willard Gibbs – the American physicist who is credited with the discovery.

Professors Michael Bonitz and Frank Hohmann, holding a facsimile of Planck’s Nobel prize certificate, which was given to the University of Kiel in 2013. Credit and Copyright: CAU/Schimmelpfennig

In 1885, the University of Kiel appointed Planck as an associate professor of theoretical physics, where he continued his studies in physical chemistry and heat systems. In 1889, he returned to Friedrich Wilhelms University in Berlin, becoming a full professor by 1892. He would remain in Berlin until he retired in January 1926, when he was succeeded by Erwin Schrodinger.

Black Body Radiation:

It was in 1894, when he was under a commission from the electric companies to develop better light bulbs, that Planck began working on the problem of black-body radiation. Physicists were already struggling to explain how the intensity of the electromagnetic radiation emitted by a perfect absorber (i.e. a black body) depended on the body’s temperature and on the frequency of the radiation (i.e., the color of the light).

In time, he resolved this problem by suggesting that electromagnetic energy is not emitted as a continuous flow but rather in discrete packets, i.e. quanta. This came to be known as the Planck postulate, which can be stated mathematically as E = hν – where E is energy, ν is the frequency, and h is the Planck constant. This theory, which was not consistent with classical Newtonian mechanics, helped to trigger a revolution in science.

A deeply conservative scientist who was suspicious of the implications his theory raised, Planck indicated that he only came by his discovery reluctantly and hoped it would be proven wrong. However, the discovery of Planck’s constant would prove to have a revolutionary impact, causing scientists to break with classical physics, and leading to the creation of the Planck units (length, time, mass, and so on).
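To make both the postulate and the units concrete, here is a quick numerical sketch; the constants are standard approximate values, and the chosen light frequency is simply an illustrative example:

```python
import math

# Physical constants (SI, approximate standard values)
h    = 6.626_070_15e-34   # Planck constant, J*s
hbar = h / (2 * math.pi)  # reduced Planck constant
c    = 2.997_924_58e8     # speed of light, m/s
G    = 6.674_30e-11       # gravitational constant, m^3 kg^-1 s^-2

# Planck's postulate E = h*nu: energy of one quantum of green light
nu_green = 5.6e14         # Hz (illustrative value)
print(f"E = h*nu ~ {h * nu_green:.2e} J per photon")

# Planck units built from hbar, G and c
l_planck = math.sqrt(hbar * G / c**3)   # Planck length, m
t_planck = l_planck / c                 # Planck time, s
m_planck = math.sqrt(hbar * c / G)      # Planck mass, kg

print(f"Planck length ~ {l_planck:.2e} m")
print(f"Planck time   ~ {t_planck:.2e} s")
print(f"Planck mass   ~ {m_planck:.2e} kg")
```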

From left to right: W. Nernst, A. Einstein, M. Planck, R.A. Millikan and von Laue at a dinner given by von Laue in Berlin, 1931. Credit: Wikipedia Commons

Quantum Mechanics:

Around the turn of the century, another influential scientist by the name of Albert Einstein made several discoveries that would prove Planck’s quantum theory to be correct. The first was his theory of photons (proposed in 1905, the same year as his Special Theory of Relativity), which contradicted classical physics and the prevailing theory of electrodynamics, according to which light was a wave that needed a medium to propagate.

The second was Einstein’s study of the anomalous behavior of the specific heats of solids at low temperatures, another example of a phenomenon which defied classical physics. Though Planck was one of the first to recognize the significance of Einstein’s special relativity, he initially rejected the idea that light could be made up of discrete quanta (in this case, photons).

However, in 1911, Planck and Walther Nernst (a colleague of Planck’s) organized a conference in Brussels known as the First Solvay Conference, the subject of which was the theory of radiation and quanta. Einstein attended, and during the course of the proceedings he was able to convince Planck of his theories regarding specific heats. The two became friends and colleagues; and in 1914, Planck created a professorship for Einstein at the University of Berlin.

During the 1920s, a new interpretation of quantum mechanics emerged, which came to be known as the “Copenhagen interpretation”. This interpretation, which was largely devised by Danish physicist Niels Bohr and German physicist Werner Heisenberg, stated that quantum mechanics can only predict probabilities; and that in general, physical systems do not have definite properties prior to being measured.

Photograph of the first Solvay Conference in 1911 at the Hotel Metropole in Brussels, Belgium. Credit: International Solvay Institutes/Benjamin Couprie

This was rejected by Planck, however, who felt that wave mechanics would soon render quantum theory unnecessary. He was joined by his colleagues Erwin Schrodinger, Max von Laue, and Einstein – all of whom wanted to save classical mechanics from the “chaos” of quantum theory. However, time would prove that both interpretations were correct (and mathematically equivalent), giving rise to theories of particle-wave duality.

World War I and World War II:

In 1914, Planck joined in the nationalistic fervor that was sweeping Germany. While not an extreme nationalist, he was a signatory of the now-infamous “Manifesto of the Ninety-Three“, a manifesto which endorsed the war and justified Germany’s participation. However, by 1915, Planck revoked parts of the Manifesto, and by 1916, he became an outspoken opponent of Germany’s annexation of other territories.

After the war, Planck was considered to be the German authority on physics, being the dean of Berlin University, a member of the Prussian Academy of Sciences and the German Physical Society, and president of the Kaiser Wilhelm Society (KWS, now the Max Planck Society). During the turbulent years of the 1920s, Planck used his position to raise funds for scientific research, which was often in short supply.

The Nazi seizure of power in 1933 resulted in tremendous hardship, some of which Planck personally bore witness to. This included many of his Jewish friends and colleagues being expelled from their positions and humiliated, and a large exodus of German scientists and academics.

Entrance of the administrative headquarters of the Max Planck Society in Munich. Credit: Wikipedia Commons/Maximilian Dörrbecker

Planck attempted to persevere in these years and remain out of politics, but was forced to step in to defend colleagues when threatened. In 1936, he resigned his positions as head of the KWS due to his continued support of Jewish colleagues in the Society. In 1938, he resigned as president of the Prussian Academy of Sciences due to the Nazi Party assuming control of it.

Despite these events and the hardships brought by the war and the Allied bombing campaign, Planck and his family remained in Germany. Planck’s son Erwin was arrested in connection with the attempted assassination of Hitler in the July 20th plot of 1944, and he was executed by the Gestapo in early 1945. This event caused Planck to descend into a depression from which he did not recover before his death.

Death and Legacy:

Planck died on October 4th, 1947 in Gottingen, Germany at the age of 89. He was survived by his second wife, Marga von Hoesslin, and his youngest son Hermann. Though he had been forced to resign his key positions in his later years, and spent the last few years of his life haunted by the death of his eldest son, Planck left a remarkable legacy in his wake.

In recognition for his fundamental contribution to a new branch of physics he was awarded the Nobel Prize in Physics in 1918. He was also elected to the Foreign Membership of the Royal Society in 1926, being awarded the Society’s Copley Medal in 1928. In 1909, he was invited to become the Ernest Kempton Adams Lecturer in Theoretical Physics at Columbia University in New York City.

The Max Planck Medal, issued by the German Physical Society in recognition of scientific contributions. Credit: dpg-physik.de

He was also greatly respected by his colleagues and contemporaries, and distinguished himself by being an integral part of the three scientific organizations that dominated the German sciences – the Prussian Academy of Sciences, the Kaiser Wilhelm Society, and the German Physical Society. The German Physical Society also created the Max Planck Medal, the first of which was awarded in 1929 to both Planck and Einstein.

The Max Planck Society was also created in the city of Gottingen in 1948 to honor his life and his achievements. This society grew in the ensuing decades, eventually absorbing the Kaiser Wilhelm Society and all its institutions. Today, the Society is recognized as being a leader in science and technology research and the foremost research organization in Europe, with 33 Nobel Prizes awarded to its scientists.

In 2009, the European Space Agency (ESA) deployed the Planck spacecraft, a space observatory which mapped the Cosmic Microwave Background (CMB) at microwave and infra-red frequencies. Between 2009 and 2013, it provided the most accurate measurements to date on the average density of ordinary matter and dark matter in the Universe, and helped resolve several questions about the early Universe and cosmic evolution.

Planck shall forever be remembered as one of the most influential scientists of the 20th century. Alongside men like Einstein, Schrodinger, Bohr, and Heisenberg (most of whom were his friends and colleagues), he helped to redefine our notions of physics and the nature of the Universe.

We have written many articles about Max Planck for Universe Today. Here’s What is Planck Time?, Planck’s First Light?, All-Sky Stunner from Planck, What is Schrodinger’s Cat?, What is the Double Slit Experiment?, and here’s a list of stories about the spacecraft that bears his name.

If you’d like more info on Max Planck, check out Max Planck’s biography from Science World and Space and Motion.

We’ve also recorded an entire episode of Astronomy Cast all about Max Planck. Listen here, Episode 218: Max Planck.


What is the CERN Particle Accelerator?

Today, CERN announced that the LHCb experiment had revealed the existence of two new baryon subatomic particles. Credit: CERN/LHC/GridPP

What if it were possible to observe the fundamental building blocks upon which the Universe is based? Not a problem! All you would need is a massive particle accelerator, an underground facility large enough to cross a border between two countries, and the ability to accelerate particles to the point where they smash into each other – releasing tremendous energy and showers of particles which you could then observe with a series of special detectors.

Well, as luck would have it, such a facility already exists, and is known as the CERN Large Hadron Collider (LHC), also known as the CERN Particle Accelerator. Measuring roughly 27 kilometers in circumference and located deep beneath the surface near Geneva, Switzerland, it is the largest particle accelerator in the world. And since CERN flipped the switch, the LHC has shed some serious light on some of the deeper mysteries of the Universe.

Purpose:

Colliders, by definition, are a type of particle accelerator that relies on two directed beams of particles. Particles are accelerated in these instruments to very high kinetic energies and then made to collide with each other. The byproducts of these collisions are then analyzed by scientists in order to ascertain the structure of the subatomic world and the laws which govern it.

The Large Hadron Collider is the most powerful particle accelerator in the world. Credit: CERN

The purpose of colliders is to produce the kind of high-energy collisions that generate particle byproducts which would otherwise not exist in nature. What’s more, these sorts of particle byproducts decay after a very short period of time, and are therefore difficult or near-impossible to study under normal conditions.

The term hadron refers to composite particles composed of quarks that are held together by the strong nuclear force, one of the four forces governing particle interaction (the others being the weak nuclear force, electromagnetism and gravity). The best-known hadrons are baryons – protons and neutrons – but hadrons also include mesons, unstable particles composed of one quark and one antiquark.

Design:

The LHC operates by accelerating two beams of “hadrons” – either protons or lead ions – in opposite directions around its circular apparatus. The hadrons then collide after they’ve achieved very high levels of energy, and the resulting particles are analyzed and studied. It is the largest high-energy accelerator in the world, measuring 27 km (17 mi) in circumference and at a depth of 50 to 175 m (164 to 574 ft).

The tunnel which houses the collider is 3.8 meters (12 ft) wide, and was previously used to house the Large Electron-Positron Collider (which operated between 1989 and 2000). It contains two adjacent parallel beamlines that intersect at four points, each carrying a beam that travels in the opposite direction to the other around the ring. The beams are steered by 1,232 dipole magnets, while 392 quadrupole magnets are used to keep them focused.

Superconducting quadrupole electromagnets are used to direct the beams to four intersection points, where interactions between accelerated protons will take place. Credit: Wikipedia Commons/gamsiz

About 10,000 superconducting magnets are used in total, which are kept at an operational temperature of -271.25 °C (-456.25 °F) – which is just shy of absolute zero – by approximately 96 tonnes of liquid helium-4. This also makes the LHC the largest cryogenic facility in the world.

When conducting proton collisions, the process begins with the linear particle accelerator (LINAC 2). After the LINAC 2 increases the energy of the protons, these particles are then injected into the Proton Synchrotron Booster (PSB), which accelerates them to high speeds.

They are then injected into the Proton Synchrotron (PS), and from there into the Super Proton Synchrotron (SPS), where they are sped up even further before being injected into the main accelerator. Once there, the proton bunches are accumulated and accelerated to their peak energy over a period of 20 minutes. Lastly, they are circulated for a period of 5 to 24 hours, during which time collisions occur at the four intersection points.
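For orientation, the approximate nominal proton energy at each stage of that chain can be laid out as follows; these are the commonly quoted figures for this era of running, listed here purely to illustrate the chain’s architecture:

```python
# Approximate nominal proton energies along the injector chain
# described above (commonly quoted values, for orientation only).
chain = [
    ("LINAC 2 (linear accelerator)",       50e6),    # ~50 MeV
    ("Proton Synchrotron Booster (PSB)",   1.4e9),   # ~1.4 GeV
    ("Proton Synchrotron (PS)",            25e9),    # ~25 GeV
    ("Super Proton Synchrotron (SPS)",     450e9),   # ~450 GeV
    ("Large Hadron Collider (LHC, Run 2)", 6.5e12),  # ~6.5 TeV per beam
]

for stage, energy_ev in chain:
    print(f"{stage:38s} ~ {energy_ev / 1e9:>8.2f} GeV")
```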

During shorter running periods, heavy-ion collisions (typically involving lead ions) are included in the program. The lead ions are first accelerated by the linear accelerator LINAC 3, and the Low Energy Ion Ring (LEIR) is used as an ion storage and cooler unit. The ions are then further accelerated by the PS and SPS before being injected into the LHC ring.

While protons and lead ions are being collided, seven detectors are used to scan for their byproducts. These include the A Toroidal LHC ApparatuS (ATLAS) experiment and the Compact Muon Solenoid (CMS), which are both general purpose detectors designed to see many different types of subatomic particles.

Then there are the more specific A Large Ion Collider Experiment (ALICE) and Large Hadron Collider beauty (LHCb) detectors. Whereas ALICE is a heavy-ion detector that studies strongly-interacting matter at extreme energy densities, the LHCb records the decay of particles and attempts to filter b and anti-b quarks from the products of their decay.

Then there are the three small and highly-specialized detectors – the TOTal Elastic and diffractive cross section Measurement (TOTEM) experiment, which measures total cross section, elastic scattering, and diffractive processes; the Monopole & Exotics Detector (MoEDAL), which searches for magnetic monopoles or massive (pseudo-)stable charged particles; and the Large Hadron Collider forward (LHCf) detector, which monitors for astroparticles (aka. cosmic rays).

History of Operation:

CERN, which stands for Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), was established on September 29th, 1954, by twelve western European signatory nations. The council’s main purpose was to oversee the creation of a particle physics laboratory in Geneva where nuclear studies would be conducted.

Illustration showing the byproducts of lead ion collisions, as monitored by the ATLAS detector. Credit: CERN

Soon after its creation, the laboratory went beyond this mandate and began conducting high-energy physics research as well. Its membership has also grown to twenty states: France, Switzerland, Germany, Belgium, the Netherlands, Denmark, Norway, Sweden, Finland, Spain, Portugal, Greece, Italy, the UK, Poland, Hungary, the Czech Republic, Slovakia, Bulgaria and Israel.

Construction of the LHC was approved in 1995 and was initially intended to be completed by 2005. However, cost overruns, budget cuts, and various engineering difficulties pushed the completion date to April of 2007. The LHC first went online on September 10th, 2008, but initial testing was delayed for 14 months following an accident that caused extensive damage to many of the collider’s key components (such as the superconducting magnets).

On November 20th, 2009, the LHC was brought back online and its First Run ran from 2010 to 2013. During this run, it collided two opposing beams of protons and of lead nuclei at energies of 4 teraelectronvolts (4 TeV) and 2.76 TeV per nucleon, respectively. The main purpose of the LHC is to recreate conditions just after the Big Bang, when collisions between high-energy particles were taking place.
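
As a rough rule of thumb, when two identical counter-rotating beams collide head-on, the total collision (center-of-mass) energy is about twice the energy of a single beam, the protons’ rest mass being negligible at these scales:

\[
\sqrt{s} \approx 2\,E_{\mathrm{beam}} = 2 \times 4\ \mathrm{TeV} = 8\ \mathrm{TeV}
\]

for the proton-proton collisions of the First Run.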

Major Discoveries:

During its First Run, the LHC’s discoveries included a particle thought to be the long sought-after Higgs Boson, which was announced on July 4th, 2012. This particle, which gives other particles mass, is a key part of the Standard Model of physics. Due to its high mass and elusive nature, the existence of this particle had rested solely on theory and had never been previously observed.

The discovery of the Higgs Boson and the ongoing operation of the LHC have also allowed researchers to investigate physics beyond the Standard Model. This has included tests concerning supersymmetry theory. The results show that certain types of particle decay are less common than some forms of supersymmetry predict, but could still match the predictions of other versions of supersymmetry theory.

In May of 2011, it was reported that quark–gluon plasma (theoretically, the densest matter besides black holes) had been created in the LHC. On November 19th, 2014, the LHCb experiment announced the discovery of two new heavy subatomic particles, both of which were baryons composed of one bottom, one down, and one strange quark. The LHCb collaboration also observed multiple exotic hadrons during the first run, possibly pentaquarks or tetraquarks.

Since 2015, the LHC has been conducting its Second Run. In that time, it has been dedicated to confirming the detection of the Higgs Boson, and making further investigations into supersymmetry theory and the existence of exotic particles at higher-energy levels.

The ATLAS detector, one of two general-purpose detectors at the Large Hadron Collider (LHC). Credit: CERN

In the coming years, the LHC is scheduled for a series of upgrades to ensure that it does not suffer from diminishing returns. In 2017-18, the LHC is scheduled to undergo an upgrade that will increase its collision energy to 14 TeV. In addition, after 2022, the collider is slated for a major luminosity upgrade – known as the High-Luminosity LHC – accompanied by upgrades to the ATLAS detector designed to increase the likelihood of detecting rare processes.

The collaborative research effort known as the LHC Accelerator Research Program (LARP) is currently investigating how to upgrade the LHC further. Foremost among the proposals are increases in the beam current and modifications to the two high-luminosity interaction regions that house the ATLAS and CMS detectors.

Who knows what the LHC will discover between now and the day when they finally turn the power off? With luck, it will shed more light on the deeper mysteries of the Universe, which could include the deep structure of space and time, the intersection of quantum mechanics and general relativity, the relationship between matter and antimatter, and the existence of “Dark Matter”.

We have written many articles about CERN and the LHC for Universe Today. Here’s What is the Higgs Boson?, The Hype Machine Deflates After CERN Data Shows No New Particle, BICEP2 All Over Again? Researchers Place Higgs Boson Discovery in Doubt, Two New Subatomic Particles Found, Is a New Particle about to be Announced?, Physicists Maybe, Just Maybe, Confirm the Possible Discovery of 5th Force of Nature.

If you’d like more info on the Large Hadron Collider, check out the LHC Homepage, and here’s a link to the CERN website.

Astronomy Cast also has some episodes on the subject. Listen here, Episode 69: The Large Hadron Collider and The Search for the Higgs Boson and Episode 392: The Standard Model – Intro.

New Theory of Gravity Does Away With Need for Dark Matter

Erik Verlinde explains his new view of gravity. Credit: University of Amsterdam

Let’s be honest. Dark matter’s a pain in the butt. Astronomers have gone to great lengths to explain why it must exist, and exist in huge quantities, yet it remains hidden. Unknown. Emitting no visible energy yet apparently strong enough to keep galaxies in clusters from busting free like wild horses, it’s everywhere in vast quantities. What is the stuff – axions, WIMPs, gravitinos, Kaluza-Klein particles?

Estimated distribution of matter and energy in the universe. Credit: NASA

It’s estimated that dark matter makes up about 27% of the matter and energy in the universe, while everything from PB&J sandwiches to quasars accounts for just 4.9%.  But a new theory of gravity proposed by theoretical physicist Erik Verlinde of the University of Amsterdam finds a way to dispense with the pesky stuff.

Snowflakes exemplify the concept of emergence with their complex symmetrical and fractal patterns created when much simpler pieces join together. Credit: Bob King

Unlike the traditional view of gravity as a fundamental force of nature, Verlinde sees it as an emergent property of space.  Emergence is a process where nature builds something large using small, simple pieces such that the final creation exhibits properties that the smaller bits don’t. Take a snowflake. The complex symmetry of a snowflake begins when a water droplet freezes onto a tiny dust particle. As the growing flake falls, water vapor freezes onto this original crystal, naturally arranging itself into a hexagonal (six-sided) structure of great beauty. The sensation of temperature is another emergent phenomenon, arising from the motion of molecules and atoms.

So too with gravity, which, according to Verlinde, emerges from entropy. We all know about entropy and messy bedrooms, but it’s a bit more subtle than that. Entropy is a measure of disorder in a system or, put another way, the number of different microscopic states a system can be in. One of the coolest descriptions of entropy I’ve heard has to do with the heat our bodies radiate. As that energy dissipates into the air, it creates a more disordered state around us while at the same time decreasing our own personal entropy to ensure our survival. If we didn’t get rid of body heat, we would eventually become disorganized (overheat!) and die.
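
For readers who want the textbook version of “number of microscopic states”, the idea is captured by Boltzmann’s entropy formula, where Ω is the number of microstates consistent with a system’s macroscopic appearance and k_B is Boltzmann’s constant:

\[
S = k_B \ln \Omega
\]

The more ways a system can be rearranged without looking any different on large scales, the higher its entropy.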

The more massive the object, the more it distorts space-time, shown here as the green mesh. Earth orbits the Sun by rolling around the dip created by the Sun’s mass in the fabric of space-time. It doesn’t fall into the Sun because it also possesses forward momentum. Credit: LIGO/T. Pyle

Emergent or entropic gravity, as the new theory is called, predicts the exact same deviation in the rotation rates of stars in galaxies currently attributed to dark matter. Gravity emerges in Verlinde’s view from changes in fundamental bits of information stored in the structure of space-time, that four-dimensional continuum revealed by Einstein’s general theory of relativity. In a word, gravity is a consequence of entropy and not a fundamental force.

Space-time, comprised of the three familiar dimensions in addition to time, is flexible. Mass warps the 4-D fabric into hills and valleys that direct the motion of smaller objects nearby. The Sun doesn’t so much “pull” on the Earth as envisaged by Isaac Newton but creates a great pucker in space-time that Earth rolls around in.

In a 2010 article, Verlinde showed how Newton’s law of gravity, which describes everything from how apples fall from trees to little galaxies orbiting big galaxies, derives from these underlying microscopic building blocks.
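
The flavor of that 2010 argument can be compressed into a few lines (a paraphrase of the published derivation, not a substitute for it). Verlinde posits that when a mass m moves a distance Δx toward a holographic screen, the screen’s entropy changes in proportion to Δx, and that the screen carries the Unruh temperature associated with an acceleration a:

\[
\Delta S = 2\pi k_B \,\frac{m c}{\hbar}\,\Delta x,
\qquad
k_B T = \frac{\hbar a}{2\pi c}.
\]

Requiring the entropic force to satisfy F Δx = T ΔS then gives F = ma. Applying the same bookkeeping to a spherical screen of area A = 4πR², carrying N = Ac³/(Għ) bits of information with total energy E = ½ N k_B T = Mc², recovers Newton’s law of gravitation, F = GMm/R².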

His latest paper, titled Emergent Gravity and the Dark Universe, delves into dark energy’s contribution to the mix.  The entropy associated with dark energy, a still-unknown form of energy responsible for the accelerating expansion of the universe, turns the geometry of spacetime into an elastic medium.

“We find that the elastic response of this ‘dark energy’ medium takes the form of an extra ‘dark’ gravitational force that appears to be due to ‘dark matter’,” writes Verlinde. “So the observed dark matter phenomena is a remnant, a memory effect, of the emergence of spacetime together with the ordinary matter in it.”

This diagram shows rotation curves of stars in M33, a typical spiral galaxy. The vertical scale is speed and the horizontal is distance from the galaxy’s nucleus. Normally, we expect stars to slow down the farther they are from galactic center (bottom curve), but in fact they revolve much faster (top curve). The discrepancy between the two curves is accounted for by adding a dark matter halo surrounding the galaxy. Credit: Public domain / Wikipedia
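
To see why the “expected” curve falls off, consider a toy model in which essentially all of a galaxy’s visible mass sits well inside a star’s orbit. The star should then follow Kepler-style dynamics, with its orbital speed dropping as the inverse square root of its distance from the center. The sketch below uses made-up numbers purely for illustration and contrasts that expectation with the roughly flat curves actually observed, which is the gap that dark matter, or Verlinde’s entropic gravity, is invoked to close.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 1e40     # toy value for a galaxy's visible mass in kg (illustrative only)
KPC = 3.086e19       # one kiloparsec in metres

def keplerian_speed(r_m):
    """Circular-orbit speed expected if only the visible mass pulls on the star."""
    return math.sqrt(G * M_VISIBLE / r_m)

for r_kpc in (5, 10, 20, 40):
    v_kms = keplerian_speed(r_kpc * KPC) / 1000.0
    print(f"r = {r_kpc:2d} kpc -> expected v = {v_kms:5.1f} km/s "
          "(observed rotation curves stay roughly flat instead)")
```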

I’ll be the first one to say how complex Verlinde’s concept is, wrapped in arcane entanglement entropy, tensor fields and the holographic principle, but the basic idea, that gravity is not a fundamental force, makes for a fascinating new way to look at an old face.

Physicists have tried for decades to reconcile gravity with quantum physics with little success. And while Verlinde’s theory should rightly be taken with a grain of salt, he may offer a way to combine the two disciplines into a single narrative that describes how everything from falling apples to black holes is connected in one coherent theory.