Unifying The Quantum Principle – Flowing Along In Four Dimensions

PASIEKA/SPL


In 1988, John Cardy asked if there was a c-theorem in four dimensions. At the time, he reasonably expected his work on theories of quantum particles and fields to be professionally put to the test… But it never happened. Now – a quarter of a century later – it seems he was right.

“It is shown that, for d even, the one-point function of the trace of the stress tensor on the sphere, Sd, when suitably regularized, defines a c-function, which, at least to one loop order, is decreasing along RG trajectories and is stationary at RG fixed points, where it is proportional to the usual conformal anomaly,” Cardy wrote in his 1988 paper. “It is shown that the existence of such a c-function, if it satisfies these properties to all orders, is consistent with the expected behavior of QCD in four dimensions.”

His speculation became known as the a-theorem: a quantity a, which measures the number of ways quantum fields can be energetically excited, is always greater at high energies than at low energies. If the theorem is correct, it will likely help explain physics beyond the current model and shed light on any possible unknown particles yet to be revealed by the Large Hadron Collider (LHC) at CERN, Europe’s particle-physics lab near Geneva, Switzerland.

“I’m pleased if the proof turns out to be correct,” says Cardy, a theoretical physicist at the University of Oxford, UK. “I’m quite amazed the conjecture I made in 1988 stood up.”

A proof of Cardy’s conjecture was presented in July 2011 by theorists Zohar Komargodski and Adam Schwimmer of the Weizmann Institute of Science in Rehovot, Israel, and it is slowly gaining acceptance as other theoretical physicists take note of their work.

“I think it’s quite likely to be right,” says Nathan Seiberg, a theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey.

Quantum field theory always stands on somewhat shaky ground: no one can calculate with complete accuracy how particles should behave in every situation. According to the Nature news release, one example is quantum chromodynamics – the theory of the strong nuclear force that describes the interactions between quarks and gluons. That gap leaves physicists struggling to relate physics at the high-energy, short-distance scale of quarks to the physics at longer-distance, lower-energy scales, such as that of protons and neutrons.

“Although lots of work has gone into relating short- and long-distance scales for particular quantum field theories, there are relatively few general principles that do this for all theories that can exist,” says Robert Myers, a theoretical physicist at the Perimeter Institute in Waterloo, Canada.

Cardy’s a-theorem just might supply such a principle in four dimensions – the three dimensions of space plus the dimension of time. In 2008, however, two physicists found a counter-example: a quantum field theory that didn’t seem to obey the rule. The story didn’t stop there. Two years later Seiberg and his colleagues re-evaluated the counter-example and discovered errors; those findings prompted further study of Cardy’s work and allowed Schwimmer and Komargodski to construct their proof. It isn’t yet airtight, and some steps need further clarification, but Myers thinks the proof is correct. “If this is a complete proof then this becomes a very powerful principle,” he says. “If it isn’t, it’s still a general idea that holds most of the time.”

According to Nature, Ken Intriligator, a theoretical physicist at the University of California, San Diego, agrees, adding that whereas mathematicians require proofs to be watertight, physicists tend to be satisfied by proofs that seem mostly right, and intrigued by any avenues to be pursued in more depth. Writing on his blog on November 9, Matt Strassler, a theoretical physicist at Rutgers University in New Brunswick, New Jersey, described the proof as “striking” because the whole argument follows once one elegant technical idea has been established.

With Cardy’s theory more thoroughly tested, chances are it will be applied more broadly across quantum field theories. This may help unify physics, including the area of supersymmetry, and aid in interpreting findings from the LHC. The a-theorem “will be a guiding tool for theorists trying to understand the physics”, predicts Myers.

Perhaps Cardy’s work will even expand into condensed matter physics, an area where quantum field theories are used to describe new states of materials. The only problem is that the a-theorem has been proven only in two and four dimensions, whereas some areas of condensed matter physics deal with layers spanning just three dimensions – two in space and one in time. However, Myers says theorists will continue working on a version of the theorem for odd numbers of dimensions. “I’m just hoping it won’t take another 20 years,” he says.

Original Story Source: Nature News Release. For Further Reading: On Renormalization Group Flows in Four Dimensions.

Was a Fifth Giant Planet Expelled from Our Solar System?

Artist’s impression of a fifth giant planet being ejected from the solar system. Image credit: Southwest Research Institute


Earth’s place in the “Goldilocks” zone of our solar system may be the result of the expulsion of a fifth giant planet from our solar system during its first 600 million years, according to a recent journal publication.

“We have all sorts of clues about the early evolution of the solar system,” said author Dr. David Nesvorny of the Southwest Research Institute. “They come from the analysis of the trans-Neptunian population of small bodies known as the Kuiper Belt, and from the lunar cratering record.”

Nesvorny and his team used the clues they had to build computer simulations of the early solar system and test their theories. What resulted was an early solar system model with quite a different configuration from today’s, and a jumbling of planets that may have given Earth the “preferred” spot for life to evolve.


Researchers interpret the clues as evidence that the orbits of Jupiter, Saturn, Uranus and Neptune were affected by a dynamical instability when our solar system was only about half a billion years old. This instability is believed to have helped increase the distance between the giant planets, along with scattering smaller bodies. The scattering pushed objects both inward and outward, with some ending up in the Kuiper Belt and others impacting the terrestrial planets and the Moon. Jupiter is believed to have scattered objects outward as it moved in towards the Sun.

One problem with this interpretation is that slow changes to Jupiter’s orbit would most likely add too much momentum to the orbits of the terrestrial planets. The additional momentum would have possibly caused a collision of Earth with Venus or Mars.

“Colleagues suggested a clever way around this problem,” said Nesvorny. “They proposed that Jupiter’s orbit quickly changed when Jupiter scattered off of Uranus or Neptune during the dynamical instability in the outer solar system.”

Basically if Jupiter’s early migration “jumps,” the orbital coupling between the terrestrial planets and Jupiter is weaker, and less harmful to the inner solar system.

Animation showing the evolution of the planetary system from 20 million years before the ejection to 30 million years after. Five initial planets are shown by red circles, small bodies are in green.
After the fifth planet is ejected, the remaining four planets eventually stabilize, resembling today's outer solar system, with giant planets at 5, 10, 20 and 30 astronomical units.
Click image to view animation. Image Credit: Southwest Research Institute

Nesvorny and his team performed thousands of computer simulations that attempted to model the early solar system in an effort to test the “jumping-Jupiter” theory. Nesvorny found that Jupiter did in fact jump due to gravitational interactions with Uranus or Neptune, but when Jupiter jumped, either Uranus or Neptune was expelled from the solar system. “Something was clearly wrong,” he said.

Based on his early results, Nesvorny added a fifth giant planet, similar to Uranus or Neptune, to his simulations. Once he ran the reconfigured simulations, everything fell into place. The simulation showed the fifth planet being ejected from the solar system by Jupiter, with the four giant planets remaining and the inner, terrestrial planets untouched.

Nesvorny concluded, “The possibility that the solar system had more than four giant planets initially, and ejected some, appears to be conceivable in view of the recent discovery of a large number of free-floating planets in interstellar space, indicating the planet ejection process could be a common occurrence.”

If you’d like to read Nesvorny’s full paper, you can access it at: http://arxiv.org/pdf/1109.2949v1

Source: Southwest Research Institute Press Release

Honoring Copernicus – Three New Elements Added To The Periodic Table

Periodic Table of the Elements (Credit: NASA)


Today, November 4, 2011, the General Assembly of the International Union of Pure and Applied Physics (IUPAP) is meeting at the Institute of Physics in London to approve the names of three new elements… one of which will honor the great Copernicus. Their names are: Element 110, darmstadtium (Ds); Element 111, roentgenium (Rg); and Element 112, copernicium (Cn).

Are these new elements? Not exactly. All three were discovered some time ago, but bodies like IUPAC select the official names to be used in scientific work – covering not only the element itself but the compounds that contain it. As a general rule, a “new element” is named by its discoverer, which can lead to international debate. An element can be named after a mythological concept, a mineral, a place or a country, one of its properties, or a well-known scientist… even an astronomer!

As for element 112, this extremely radioactive synthetic element can only be created in a laboratory. Copernicium was first produced on February 9, 1996 by the Gesellschaft für Schwerionenforschung, but it kept its placeholder name – ununbium – until almost two years ago, when the German team provided enough evidence to prove its existence. When it was time to give the element a permanent name, the rules required that it end in “-ium” and that it not be named for a living person. On February 19, 2010, the 537th anniversary of Copernicus’ birth, IUPAC officially accepted the proposed name and symbol.

The naming process starts with the Joint Working Party on the Discovery of Elements, a joint body of IUPAP and the International Union of Pure and Applied Chemistry (IUPAC). From there the names go to the General Assembly for approval. Dr. Robert Kirby-Harris, Chief Executive at IOP and Secretary-General of IUPAP, said, “The naming of these elements has been agreed in consultation with physicists around the world and we’re delighted to see them now being introduced to the Periodic Table.”

The General Assembly consists of 60 members from different countries, delegates elected from national academies and physical societies around the world. The five-day meeting, which opened on Monday, October 31, ends today. It has included presentations from leading UK physicists and the inauguration of IUPAP’s first female President, Professor Cecilia Jarlskog from the Division of Mathematical Physics at Lund University in Sweden.

Original Story Source: Institute of Physics News Release.

Absorption Lines Shed New Light on 90 Year Old Puzzle

Gemini North Observatory, Maunakea Hawaii. Image Credit: Gemini Observatory/AURA


Using the Gemini North Telescope, astronomers studying the central region of the Milky Way have discovered 13 diffuse interstellar bands at the longest wavelengths observed to date. The team’s discovery could someday solve a 90-year-old mystery about the existence of these bands.

“These diffuse interstellar bands—or DIBs—have never been seen before,” says Donald Figer, director of the Center for Detectors at Rochester Institute of Technology and one of the authors of a study appearing in the journal Nature.

What phenomena are responsible for these absorption lines, and what impact do they have on our studies of the galaxy?

Figer offers his explanation of absorption lines, stating, “Spectra of stars have absorption lines because gas and dust along the line of sight to the stars absorb some of the light.”

Figer adds, “The most recent ideas are that diffuse interstellar bands are relatively simple carbon bearing molecules, similar to amino acids. Maybe these are amino acid chains in space, which supports the theory that the seeds of life originated in space and rained down on planets.”

“Observations in different Galactic sight lines indicate that the material responsible for these DIBs ‘survives’ under different physical conditions of temperature and density,” adds team member Paco Najarro (Center of Astrobiology, Madrid).

The discovery of low energy absorption lines by Figer and his team helps to determine the nature of diffuse interstellar bands. Figer believes that any future models that predict which wavelengths the particles absorb will have to include the newly discovered lower energies, stating, “We saw the same absorption lines in the spectra of every star. If we look at the exact wavelength of the features, we can figure out the kind of gas and dust between us and the stars that is absorbing the light.”

Spectra of the newly discovered Diffuse Interstellar Bands (DIBs).
Image Credit: Geballe, Najarro, Figer, Schlegelmilch, and de la Fuente.

Since their discovery 90 years ago, diffuse interstellar bands have been a mystery. The bands identified before the team’s study occur mostly at visible wavelengths. Part of the puzzle is that the observed lines don’t match the predicted lines of simple molecules and can’t be traced to a single source.

“None of the diffuse interstellar bands has been convincingly identified with a specific element or molecule, and indeed their identification, individually and collectively, is one of the greatest challenges in astronomical spectroscopy. Recent studies have suggested that DIB carriers are large carbon-containing molecules,” says lead author Thomas Geballe (Gemini Observatory).

The newly discovered infrared bands offer another benefit: they can be used to probe the diffuse interstellar medium, where thick dust and gas normally block observations at visible wavelengths. By studying the stronger bands, scientists may gain a better understanding of their molecular origin. So far, no research team has been able to re-create the interstellar bands in a laboratory setting, mostly due to the difficulty of reproducing the temperature and pressure conditions the gas would experience in space.

If you’d like to learn more about the Gemini Observatory, visit: http://www.gemini.edu/
Read more about RIT’s Center for Detectors at: http://ridl.cis.rit.edu/

Source: Rochester Institute of Technology Press Release

TV Viewing Alert: New Mini-Series: Fabric of the Cosmos

A new 4-part mini-series debuts tonight on PBS stations in the US, featuring theoretical physicist Brian Greene. The series is called “Fabric of the Cosmos” and is based on Greene’s 2004 book of the same name. It premieres tonight (Nov. 2, 2011) on NOVA, with subsequent episodes airing November 9, 16 and 23. The series will probe the most extreme realms of the cosmos, from black holes to dark matter, to time bending and parallel realities.

Check your local listings for time.

Large Hadron Collider Finishes 2011 Proton Run

A new loop will be added to CERN's Antiproton Decelerator in 2016 to increase antiproton production at low energies. Credit: CERN


The world’s largest and highest-energy particle accelerator has been busy. At 5:15 p.m. on October 30, 2011, the Large Hadron Collider in Geneva, Switzerland reached the end of its current proton run. It came after 180 consecutive days of operation and four hundred trillion proton collisions. For the second year, the LHC team has gone beyond its operational objectives, delivering more experimental data at a higher rate than planned. But just what has it done?

When this year’s run started, the goal was to produce a quantity of data known to physicists as one inverse femtobarn. While that might sound like a science-fiction term, it’s a science fact: the inverse femtobarn is a unit of integrated particle-collision data, and one inverse femtobarn corresponds to about 70 million million collisions. The first inverse femtobarn came on June 17th, just in time for the major summer physics conferences, and the goal was then raised to five inverse femtobarns. That incredible number of collisions was reached on October 18, 2011 and then surpassed, as almost six inverse femtobarns were delivered to each of the two general-purpose experiments, ATLAS and CMS.
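The arithmetic behind those figures can be sketched as integrated luminosity times cross section; the ~70-millibarn total proton-proton cross section used below is a standard approximate value I am assuming, not a number from the press release.

```python
# N = integrated luminosity x cross section.
# Assumes sigma_pp ~ 70 mb at LHC energies (approximate, assumed figure).

MILLIBARN_TO_FEMTOBARN = 1e12              # 1 mb = 10^12 fb
sigma_pp_fb = 70 * MILLIBARN_TO_FEMTOBARN  # ~7e13 fb

def collisions(integrated_luminosity_inv_fb: float) -> float:
    """Approximate number of pp collisions for a given integrated luminosity."""
    return integrated_luminosity_inv_fb * sigma_pp_fb

print(f"1 fb^-1 -> {collisions(1):.1e} collisions")  # ~7e13: '70 million million'
print(f"6 fb^-1 -> {collisions(6):.1e} collisions")  # ~4e14: the run's ~400 trillion
```

With these round numbers, one inverse femtobarn lands on the article’s “about 70 million million collisions,” and six inverse femtobarns on its “four hundred trillion.”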

“At the end of this year’s proton running, the LHC is reaching cruising speed,” said CERN’s Director for Accelerators and Technology, Steve Myers. “To put things in context, the present data production rate is a factor of 4 million higher than in the first run in 2010 and a factor of 30 higher than at the beginning of 2011.”

But that’s not all the LHC delivered this year. This year’s proton run also narrowed the accessible hiding space for the highly prized Higgs boson and for supersymmetric particles. This certainly put the Standard Model of particle physics and our understanding of the primordial Universe to the test!

“It has been a remarkable and exciting year for the whole LHC scientific community, in particular for our students and post-docs from all over the world. We have made a huge number of measurements of the Standard Model and accessed unexplored territory in searches for new physics. In particular, we have constrained the Higgs particle to the light end of its possible mass range, if it exists at all,” said ATLAS Spokesperson Fabiola Gianotti. “This is where both theory and experimental data expected it would be, but it’s the hardest mass range to study.”

“Looking back at this fantastic year I have the impression of living in a sort of a dream,” said CMS Spokesperson Guido Tonelli. “We have produced tens of new measurements and constrained significantly the space available for models of new physics and the best is still to come. As we speak hundreds of young scientists are still analysing the huge amount of data accumulated so far; we’ll soon have new results and, maybe, something important to say on the Standard Model Higgs Boson.”

“We’ve got from the LHC the amount of data we dreamt of at the beginning of the year and our results are putting the Standard Model of particle physics through a very tough test ” said LHCb Spokesperson Pierluigi Campana. “So far, it has come through with flying colours, but thanks to the great performance of the LHC, we are reaching levels of sensitivity where we can see beyond the Standard Model. The researchers, especially the young ones, are experiencing great excitement, looking forward to new physics.”

Over the next few weeks, the LHC team will be further refining the 2011 data set with an eye to improving our understanding of physics. And while it’s possible we’ll learn more from the current findings, a leap to a full 10 inverse femtobarns is projected for 2012. Right now the LHC is being prepared for four weeks of lead-ion running, along with an attempt to demonstrate that “large can also be agile” by colliding protons with lead ions in two dedicated periods of machine development. If this new strand of LHC operation succeeds, scientists will be able to use protons to probe the internal workings of much heftier structures – lead ions. This bears directly on quark-gluon plasma, the surmised primordial state of ordinary matter from which the Universe evolved.

“Smashing lead ions together allows us to produce and study tiny pieces of primordial soup,” said ALICE Spokesperson Paolo Giubellino, “but as any good cook will tell you, to understand a recipe fully, it’s vital to understand the ingredients, and in the case of quark-gluon plasma, this is what proton-lead ion collisions could bring.”

Original Story Source: CERN Press Release.

Quantum Levitation And The Superconductor

Superconductivity and magnetic fields are like oil and water… they don’t mix. When it can, a superconductor will push any magnetic field out of its interior, a phenomenon called the Meissner effect. It happens when a sample is cooled below its superconducting transition temperature, at which point it expels its magnetic flux. Now the fun really begins… Continue reading “Quantum Levitation And The Superconductor”

Special Relativity May Answer Faster-than-Light Neutrino Mystery

The relativistic motion of clocks on board GPS satellites exactly accounts for the superluminal effect, says physicist. Credit: arXiv


Oh, yeah. Moving faster than the speed of light has been the hot topic in the news, and OPERA has been the key player. In case you didn’t know, the experiment beamed neutrinos from CERN, close to Geneva. It wasn’t the production of the particles that caused the buzz – it was the revelation that they arrived at the Gran Sasso Laboratory in Italy around 60 nanoseconds sooner than they should have. Sooner than the speed of light allows!

Since the announcement, the physics world has been on fire, producing more than 80 papers, each with its own take. While some tried to explain the effect, others sought to discredit it. The overpowering consensus was that the OPERA team simply must have overlooked one critical element. On October 14, 2011, Ronald van Elburg at the University of Groningen in the Netherlands put forward his own analysis – one that makes a persuasive case that he may have found the error in the calculations.

To get a clearer picture: the distance the neutrinos traveled is straightforward. They began at CERN, and the baseline to the detector was surveyed using global positioning systems. The Gran Sasso Laboratory sits beneath a kilometre-high mountain, but the OPERA team took this into account and produced a distance of 730 km, accurate to within 20 cm. The neutrino flight time was then measured using clocks at the two ends, so the team knew exactly when the particles left and when they arrived.
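For scale, a quick computation using only the figures above shows the nominal light travel time over the baseline and how much distance a 60-nanosecond lead represents – far more than the 20 cm survey tolerance:

```python
# Nominal light travel time over the CERN-Gran Sasso baseline, and the
# distance equivalent of the reported 60 ns early arrival.

C = 299_792_458.0    # speed of light, m/s
BASELINE_M = 730e3   # baseline distance from the article, metres
EARLY_S = 60e-9      # reported early arrival, seconds

flight_time_ms = BASELINE_M / C * 1e3
equivalent_m = EARLY_S * C

print(f"nominal flight time: {flight_time_ms:.3f} ms")  # ~2.435 ms
print(f"60 ns of light travel: {equivalent_m:.1f} m")   # ~18 m, vs 0.2 m tolerance
```

So the anomaly corresponds to roughly 18 metres over a 730 km baseline known to 20 cm, which is why attention turned to timing rather than distance.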

But were the clocks perfectly synchronized?

Keeping time is again the domain of the GPS satellites, each broadcasting a highly accurate time signal from orbit some 20,000 km overhead. But is it possible the team overlooked the time it took the satellite signals to reach the ground? In his paper, van Elburg says there is one effect the OPERA team seems to have overlooked: the relativistic motion of the GPS clocks.

Sure, radio waves travel at the speed of light, so what difference does the satellite’s position make? In itself, none… but the signal’s time of flight does. Here we have a scenario where one clock is on the ground while the other is orbiting; since they are moving relative to one another, that relative motion has to be included in the calculation. The GPS satellites orbit from west to east in a plane inclined at 55 degrees to the equator – almost directly in line with the neutrino flight path. This means that, from the point of view of a clock on board, the positions of the neutrino source and the detector are changing.

“From the perspective of the clock, the detector is moving towards the source and consequently the distance travelled by the particles as observed from the clock is shorter,” says van Elburg.

According to the news source, he means shorter than the distance measured in the reference frame on the ground – something the OPERA team overlooked because it thought of the clocks as being on the ground, not in orbit. Van Elburg calculates that the effect should cause the neutrinos to appear to arrive 32 nanoseconds early. And the figure must be doubled, because the same error occurs at each end of the experiment. The total correction is about 64 nanoseconds – almost exactly what the OPERA team observes.
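The size of the claimed correction can be sketched as a time-of-flight error of roughly v·d/c² at each end; the ~3.9 km/s GPS orbital speed used below is a standard textbook figure I am assuming, not one quoted in the article.

```python
# Per-end timing error ~ v * d / c^2 for a clock moving at GPS orbital speed.

C = 299_792_458.0   # speed of light, m/s
V_GPS = 3.9e3       # GPS satellite orbital speed, m/s (assumed standard figure)
BASELINE_M = 730e3  # CERN-Gran Sasso baseline, metres

per_end_ns = V_GPS * BASELINE_M / C**2 * 1e9
total_ns = 2 * per_end_ns   # the same error enters at both ends

print(f"per end: {per_end_ns:.1f} ns, total: {total_ns:.1f} ns")
```

With these round inputs the result lands near 32 ns per end and roughly 64 ns in total, close to the figures quoted.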

Is this the final answer to the faster-than-light puzzle? No. It’s just another possible explanation for a new riddle – one that itself awaits confirmation.

Original Story Source: Technology Review News Release. For Further Reading: Can apparent superluminal neutrino speeds be explained as a quantum weak measurement?.

Astronomy Without A Telescope – Light Speed

The effect of time dilation is negligible for common speeds, such as that of a car or even a jet plane, but it increases dramatically when one gets close to the speed of light.


The recent news of neutrinos moving faster than light might have got everyone thinking about warp drive and all that, but really there is no need to imagine something that can move faster than 300,000 kilometres a second.

Light speed, or 300,000 kilometres a second, might seem like a speed limit, but this is just an example of 3 + 1 thinking – where we still haven’t got our heads around the concept of four dimensional space-time and hence we think in terms of space having three dimensions and think of time as something different.

For example, while it seems to us that it takes a light beam 4.3 years to go from Earth to the Alpha Centauri system, if you were to hop on a spacecraft going at 99.999 per cent of the speed of light you would get there in a matter of days, hours or even minutes – depending on just how many .99s you add on to that proportion of light speed.

This is because, as you keep pumping the accelerator of your imaginary star drive system, time dilation will become increasingly more pronounced and you will keep getting to your destination that much quicker. With enough .999s you could cross the universe within your lifetime – even though someone you left behind would still only see you moving away at a tiny bit less than 300,000 kilometres a second. So, what might seem like a speed limit at first glance isn’t really a limit at all.
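That shrinking on-board travel time can be sketched with the standard Lorentz factor; the numbers below follow from the formula alone, applied to the article’s 4.3-light-year trip.

```python
# Proper (on-board) trip time to Alpha Centauri as more nines are
# added to the cruise speed, using the Lorentz factor gamma.

import math

DISTANCE_LY = 4.3   # Earth to Alpha Centauri, light-years

def traveller_days(beta: float) -> float:
    """On-board duration, in days, of the trip at speed beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    earth_years = DISTANCE_LY / beta        # duration seen from Earth
    return earth_years / gamma * 365.25     # time dilation shrinks it on board

for nines in (5, 7, 9):
    beta = 1.0 - 10.0 ** -nines
    print(f"{nines} nines: {traveller_days(beta):.2f} days on board")
```

At 0.99999 of light speed the trip takes about a week of on-board time, and each additional pair of nines cuts it roughly tenfold – the “days, hours or even minutes” of the example above.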

The effect of time dilation is negligible for common speeds we are familiar with on Earth, but it increases dramatically and asymptotically as you approach the speed of light.

To try to comprehend the four-dimensional perspective on this, consider that it’s impossible to move across any distance without also moving through time. For example, walking a kilometre might take you thirty minutes – but if you run, it might take only fifteen.

Speed is just a measure of how long it takes you to reach a distant point. Relativity physics lets you pick any destination you like in the universe – and with the right technology you can reduce your travel time to that destination to any extent you like – as long as your travel time stays above zero.

That is the only limit the universe really imposes on us – and it’s as much about logic and causality as it is about physics. You can travel through space-time in various ways to reduce your travel time between points A and B – and you can do this up until you almost move between those points instantaneously. But you can’t do it faster than instantaneously because you would arrive at B before you had even left A.

If you could do that, it would create impossible causality problems – for example you might decide not to depart from point A, even though you’d already reached point B. The idea is both illogical and a breach of the laws of thermodynamics, since the universe would suddenly contain two of you.

So, you can’t move faster than light – not because of anything special about light, but because you can’t move faster than instantaneously between distant points. Light essentially does move instantaneously, as does gravity and perhaps other phenomena that we are yet to discover – but we will never discover anything that moves faster than instantaneously, as the idea makes no sense.

We mass-laden beings experience duration when moving between distant points – and so we are able to also measure how long it takes an instantaneous signal to move between distant points, even though we could never hope to attain such a state of motion ourselves.

We are stuck on the idea that 300,000 kilometres a second is a speed limit, because we intuitively believe that time runs at a constant universal rate. However, we have proven in many different experimental tests that time clearly does not run at a constant rate between different frames of reference. So with the right technology, you can sit in your star-drive spacecraft and make a quick cup of tea while eons pass by outside. It’s not about speed, it’s about reducing your personal travel time between two distant points.

As Woody Allen once said: Time is nature’s way of keeping everything from happening at once. Space-time is nature’s way of keeping everything from happening in the same place at once.

The Crab Gets Cooked With Gamma Rays

X-ray: NASA/CXC/ASU/J. Hester et al.; Optical: NASA/HST/ASU/J. Hester et al.; Radio: NRAO/AUI/NSF. This image of the Crab Nebula combines visible light (green) and radio waves (red) emitted by the remnants of a cataclysmic supernova explosion in the year 1054, with the X-ray nebula (blue) created inside the optical nebula by a pulsar (the collapsed core of the massive star destroyed in the explosion). The pulsar, which is the size of a small city, was discovered only in 1969. The optical data are from the Hubble Space Telescope, the radio emission from the National Radio Astronomy Observatory, and the X-ray data from the Chandra Observatory.


It’s one of the most famous sights in the night sky… and 957 years ago it was bright enough to be seen during the day. This supernova event was one of the most spectacular of its kind and it still delights, amazes and even surprises astronomers to this day. Think there’s nothing new to know about M1? Then think again…

An international collaboration of astrophysicists, including a group from the Department of Physics in Arts & Sciences at Washington University in St. Louis, has detected pulsed gamma rays coming from the heart of the “Crab”. Apparently the central neutron star is giving off energies that can’t quite be explained. The pulses range between 100 and 400 billion electronvolts (gigaelectronvolts, or GeV), far higher than 25 GeV, the most energetic pulsed radiation previously recorded. To give you an example, a 400 GeV photon is hundreds of billions of times more energetic than a photon of visible light.
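That energy comparison is easy to check; the ~2 eV value for a typical visible-light photon is an assumed round figure, not a number from the article.

```python
# Energy ratio of a 400 GeV gamma-ray photon to a visible-light photon.

GAMMA_RAY_EV = 400e9   # 400 GeV, from the article, in electronvolts
OPTICAL_EV = 2.0       # typical visible-light photon energy (assumed)

ratio = GAMMA_RAY_EV / OPTICAL_EV
print(f"{ratio:.0e}")   # 2e+11: hundreds of billions of times more energetic
```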

“This is the first time very-high-energy gamma rays have been detected from a pulsar – a rapidly spinning neutron star about the size of the city of Ames but with a mass greater than that of the Sun,” said Frank Krennrich, an Iowa State professor of physics and astronomy and a co-author of the paper.

We can thank the Arizona-based Very Energetic Radiation Imaging Telescope Array System (VERITAS) – an array of four 12-meter Cherenkov telescopes, each with a reflector made up of some 350 mirror facets – for the findings. It continually monitors Earth’s atmosphere for the fleeting signals of gamma-ray radiation. Even so, findings like these on such a well-known object are nearly unprecedented.

“We presented the results at a conference and the entire community was stunned,” says Henric Krawczynski, PhD, professor of physics at Washington University. The WUSTL group led by James H. Buckley, PhD, professor of physics, and Krawczynski is one of six founding members of the VERITAS consortium.

An X-ray image of the Crab Nebula and pulsar. Image by the Chandra X-ray Observatory, NASA/CXC/SAO/F. Seward.

We know the Crab’s story and how its pulsar sweeps around like a lighthouse… But Krennrich said such high energies can’t be explained by the current understanding of pulsars. Not even curvature radiation can be at the root of these gamma-ray emissions.

“The pulsar in the center of the nebula had been seen in radio, optical, X-ray and soft gamma-ray wavelengths,” says Matthias Beilicke, PhD, research assistant professor of physics at Washington University. “But we didn’t think it was radiating pulsed emissions above 100 GeV. VERITAS can observe gamma-rays between 100 GeV and 30 trillion electronvolts (teraelectronvolts, or TeV).”

Just enough to cook one crab… well done!

Original Story Source: Iowa State University News Release. For Further Reading: Washington University in St. Louis News Release.