Galactic Archaeology: NGC 5907 – The Dragon Clash

NGC 5907 - Credit: R. Jay Gabany


The sprawling northern constellation of Draco is home to a monumental galactic merger which left a singular spectacle – NGC 5907. Surrounded by an ethereal garment of wispy star trails and currents of stellar material, this spiral galaxy is the survivor of a “clash of the dragons” which may have occurred some 8 to 9 billion years ago. Recent theory suggests galaxies of this type may be the product of a larger galaxy encountering a smaller satellite – but this might not be the case. Not only is NGC 5907 a bit different in some respects, it’s a lot different in others… and peculiar motion is just the beginning.

“If the disc of many spirals is indeed rebuilt after a major merger, it is expected that tidal tails can be a fossil record and that there should be many loops and streams in their halos. Recently Martínez-Delgado et al. (2010) have conducted a pilot survey of isolated spiral galaxies in the Local Volume up to a low surface brightness sensitivity of ~28.5 mag/arcsec² in V band. They find that many of these galaxies have loops or streams of various shapes and interpret these structures as evidence of minor merger or satellite infall.” says J. Wang of the Chinese Academy of Sciences. “However, if these loops are caused by minor mergers, the residual of the satellite core should be detected according to numerical simulations. Why is it hardly ever detected?”

The “why” is indeed the reason NGC 5907 is being intensively studied by a team of six scientists from the Observatoire de Paris, CNRS, the Chinese Academy of Sciences, the National Astronomical Observatories of China (NAOC) and the Marseille Observatory. Even though NGC 5907 is a member of a galactic group, there are no galaxies near enough to be causing an interaction that could account for its streamers of stars. It is a truly warped galaxy, with gaseous and stellar disks that extend beyond the nominal cut-off radius. But that’s not all… it also has a peculiar halo which includes a significant fraction of metal-enriched stars. NGC 5907 just doesn’t fit the patterns.

“For some of our models, we assume a star formation history with a varying global efficiency in transforming gas to stars, in order to preserve enough gas from being consumed before fusion.” explains the research team. “Although this fine-tuned star formation history may have some physical motivations, its main role is also to ensure the formation of stars after the emergence of the gaseous disc just after fusion.”

On the left, the NGC 5907 galaxy; on the right, the simulations it is compared with. Both cases show an edge-on galactic disk surrounded by giant loops of old stars, which bear witness to a former, gigantic collision. (Jay Gabany, cosmotography.com / Observatoire de Paris / CNRS / Pythéas / NAOC)

Now enter the 32- and 196-core computers at the Paris Observatory computing center and the 680-core Graphics Processing Unit supercomputer of the Beijing NAOC, capable of running 50,000 billion operations per second. By employing several state-of-the-art hydrodynamical numerical simulations, with particle numbers ranging from 200,000 to 6 million, the team’s goal was to show that the structure of NGC 5907 may have been the result of the clash of two dragon-sized galaxies… or was it?

“The exceptional features of NGC 5907 can be reproduced, together with the central galaxy properties, especially if we compare the observed loops to the high-order loops expected in a major merger model.” says Wang. “Given the extremely large number of parameters, as well as the very numerous constraints provided by the observations, we cannot claim that we have already identified the exact and unique model of NGC 5907 and its halo properties. We nevertheless succeeded in reproducing the loop geometry, and a disc-dominated, almost bulge-less galaxy.”

In the meantime, major galaxy merger events will continue to be a top priority in formation research. “Future work will include modelling other nearby spiral galaxies with large and faint, extended features in their halos.” concludes the team. “These distant galaxies are likely similar to the progenitors, six billion years ago, of present-day spirals, and linking them together could provide another crucial test for the spiral rebuilding disc scenario.”

And sleeping dragons may one day arise…

Original Story Source: Paris Observatory News. For Further Reading: Loops formed by tidal tails as fossil records of a major merger and Fossils of the Hierarchical Formation of the Nearby Spiral Galaxy NGC 5907.

Faster Than Light? More Like Faulty Wiring.

Image credit: CORBIS/CERN


You can shelve your designs for a warp drive engine (for now) and put the DeLorean back in the garage; it turns out neutrinos may not have broken any cosmic speed limits after all.

Ever since the news came out on September 22 of last year that a team of researchers in Italy had clocked neutrinos traveling faster than the speed of light, the physics world has been resounding with the potential implications of such a discovery — that is, if it were true. The speed of light has been a key component of the standard model of physics for over a century, an Einstein-established limit that particles (even tricky neutrinos) weren’t supposed to be able to break, not even a little.

Now, according to a breaking news article by Edwin Cartlidge on AAAS’ ScienceInsider, the neutrinos may be cleared of any speed violations.

“According to sources familiar with the experiment, the 60 nanoseconds discrepancy appears to come from a bad connection between a fiber optic cable that connects to the GPS receiver used to correct the timing of the neutrinos’ flight and an electronic card in a computer,” Cartlidge reported.

The original OPERA (Oscillation Project with Emulsion-tRacking Apparatus) experiment had a beam of neutrinos fired from CERN in Geneva, Switzerland, aimed at an underground detector array located 730 km away at the Gran Sasso facility, near L’Aquila, Italy. Researchers were surprised to discover the neutrinos arriving earlier than expected, by a difference of 60 nanoseconds. This would have meant the neutrinos had traveled faster than light speed to get there.
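
For a sense of scale, here is a quick back-of-the-envelope sketch (not part of the OPERA analysis itself) comparing the 60 nanosecond discrepancy with the light travel time over the 730 km baseline; the distance and timing values are simply the rounded figures quoted above.

```python
# Back-of-the-envelope check of the numbers quoted above (not OPERA's
# actual analysis): how big is a 60 ns early arrival compared with the
# light travel time over the ~730 km CERN-to-Gran Sasso baseline?

C = 299_792_458.0      # speed of light in vacuum, m/s
BASELINE = 730e3       # approximate CERN -> Gran Sasso distance, m
EARLY_ARRIVAL = 60e-9  # reported discrepancy, s

light_time = BASELINE / C                     # about 2.4 milliseconds
fractional_excess = EARLY_ARRIVAL / light_time

print(f"light travel time   : {light_time * 1e3:.3f} ms")
print(f"implied (v - c) / c : {fractional_excess:.2e}")  # roughly 2.5e-5
```

In other words, the claimed excess over light speed amounted to roughly 2.5 parts in 100,000.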

Repeated experiments at the facility revealed the same results. When the news was released, the findings seemed to be solid — from a methodological standpoint, anyway.

Shocked at their own results, the OPERA researchers were more than happy to have colleagues check their results, and welcomed other facilities to attempt the same experiment.

Repeated attempts may no longer be needed.

Once the aforementioned fiber optic cable was readjusted, it was found that the time it took data to travel through it shifted by the same 60 nanoseconds initially attributed to the neutrinos. This could very well explain the subatomic particles’ apparent speed burst.

Case closed? Well… it is science, after all.

“New data,” Cartlidge added, “will be needed to confirm this hypothesis.”

See the original OPERA team paper here.

_______________________

UPDATE 2/22/12 11:48 pm EST: According to a more recent article on Nature’s newsblog, the Science Insider report erroneously attributed the 60 nanosecond discrepancy to loose fiber optic wiring from the GPS unit, based on inside “sources”. OPERA’s statement doesn’t specify as such, “saying instead that its two possible sources of error point in opposite directions and it is still working things out.”

OPERA’s official statement released today is as follows:

“The OPERA Collaboration, by continuing its campaign of verifications on the neutrino velocity measurement, has identified two issues that could significantly affect the reported result. The first one is linked to the oscillator used to produce the events time-stamps in between the GPS synchronizations. The second point is related to the connection of the optical fiber bringing the external GPS signal to the OPERA master clock.

These two issues can modify the neutrino time of flight in opposite directions. While continuing our investigations, in order to unambiguously quantify the effect on the observed result, the Collaboration is looking forward to performing a new measurement of the neutrino velocity as soon as a new bunched beam will be available in 2012. An extensive report on the above mentioned verifications and results will be shortly made available to the scientific committees and agencies.” (via Nature newsblog.)

Skydiver Prepares for Record-Setting Freefall from the Edge of Space

Baumgartner, left with Joe Kittinger. Credit: Red Bull Stratos

In 2010, we reported on Felix Baumgartner and his upcoming attempt to break the sound barrier with his body, in a freefall from the edge of space. Part science experiment, part publicity stunt, part life-long ambition, the Red Bull Stratos mission will have Baumgartner traveling inside a capsule lifted by a stratospheric balloon to 36,500 meters (120,000 feet), where he will step out and attempt the highest freefall jump ever. The mission was delayed two years by a lawsuit, but Baumgartner’s jump is now back on and will be attempted later this year, perhaps late summer or early fall 2012.

If Baumgartner is successful, the mission will break four world records: the altitude record for freefall, the distance record for longest freefall, the speed record for fastest freefall by breaking the speed of sound with the human body, and the altitude record for the highest manned balloon flight.

“This is the biggest goal I can dream of,” Baumgartner said. “If we can prove that you can break the speed of sound and stay alive I think that is a benefit for future space exploration.”

Above is a video of some of the preparations to test Baumgartner’s pressure suit and his body’s reaction to what he will endure during the freefall. The pressurized “space” suit and helmet supply 20 minutes of oxygen and include specially designed equipment developed to capture data throughout the mission for the medical and scientific advancement of human flight.

The speed of sound — historically called the ‘sound barrier’ – has been broken by rockets, various jet-powered aircraft and rocket-boosted land vehicles. No one has broken it yet with just their body.


Back in 1960, a US Air Force captain named Joe Kittinger made aerospace history with a jump from 31,300 meters (102,800 feet) in what was called Project Excelsior. His jump contributed valuable data that provided the groundwork for spacesuit technology and knowledge about human physiology for the US space program. There have been several attempts to surpass Kittinger’s record, but none have succeeded, and people have given their lives in the quest.

Kittinger has been working with Baumgartner to help him prepare for the jump.

The Red Bull Stratos mission is named after the energy drink company that is sponsoring the jump by the renowned Austrian skydiver. Red Bull Stratos team members say the mission will explore the limits of the human body in one of the most hostile environments known to humankind, in the attempt to deliver valuable lessons in human endurance and high-altitude technology.

The lawsuit that halted the jump was brought by Daniel Hogan, who claimed he pitched the idea of breaking the 50-year-old freefall record to Red Bull in 2004, and that Red Bull said it wasn’t interested but later went forward with the idea anyway. Hogan sought multi-million-dollar damages from the energy drink company, but the two parties settled out of court.

The delay may have been a good thing, however. Baumgartner revealed that in December 2010, during the first pressure tests of the suit, he had a panic attack – an event which he called “the worst moment of his life.”

Baumgartner entering the pressure test capsule. Credit: Red Bull Stratos

“When it came to the crucial pressure test at -60°C, under real conditions with pressure and altitude simulated, and surrounded by cameras, air force personnel and scientists, I realized I just couldn’t do it,” Baumgartner said in an article in the Red Bulletin.

Baumgartner said he thought the suit should feel like a ‘second skin,’ but instead he felt his movements and perceptions were restricted. “As soon as the visor closes there’s this nightmarish silence and loneliness – the suit signifies imprisonment. We hadn’t originally conceived of a test that confined me in the suit for five hours – that’s how long the entire mission should take – with the visor closed. After all my past exploits, all the extreme things I’ve done in my career, no one would have ever guessed that simply wearing a space suit would threaten the mission, me included. In the end, the symptoms developed into panic attacks.”

Baumgartner during a test flight. Credit: Red Bull Stratos

But Baumgartner has been able to overcome the panic attacks and is now moving forward with preparations for the jump. The jump will be recorded for a documentary, with 15 cameras onboard the capsule and three cameras on Baumgartner’s body. The documentary will be produced by the BBC together with the National Geographic channel, with a feature-length film airing on the two channels following the jump.

The mission will take place in Roswell, New Mexico, because of its favorable conditions. The area is sparsely populated, it has some of the world’s best facilities for balloon launches such as this, and the weather allows several good windows for a successful launch.

For more information, see the Red Bull Stratos website, and the Red Bulletin.

Take a look at the infographic about the jump.

Dancing Water Drops In Earth Orbit

An astronaut once told me that fellow space flier Don Pettit could fix anything with a paper clip. Indeed, Pettit has nicknames like Mr. Wizard and Mr. Fixit, and he is well-known for his Saturday Morning Science videos during his first stay on the International Space Station and his “Zero G Coffee Cup” from a space shuttle mission he was on in 2008. Now in his second long-duration stint on the ISS, Pettit has a new video series called “Science off the Sphere” and the first video is above. Pettit uses “knittin” needles (watch the video to hear Pettit’s pronunciation) and water droplets to demonstrate physics in space, and shows what fun astronauts can have with water in zero-G with his ‘dancing’ water droplets.

This new video series is a partnership between NASA and the American Physical Society. But there’s more than just videos: at the end of each video Pettit poses a challenge question. Submit your answers at the Science Off the Sphere website for a chance to have your name read from space and receive a snazzy t-shirt from Earth.


Journal Club – Neutrino Vision

Today's Journal Club is about the latest findings in neutrino astronomy.


According to Wikipedia, a journal club is a group of individuals who meet regularly to critically evaluate recent articles in the scientific literature. And of course, the first rule of Journal Club is… don’t talk about Journal Club.

So, without further ado – today’s journal article is about the latest findings in neutrino astronomy.

Today’s article:
Gaisser, Astrophysical neutrino results.

This paper presents some recent observations from the IceCube neutrino telescope at the South Pole – which actually observes neutrinos from the northern sky – using the Earth to filter out some of the background noise. Cool huh?

Firstly, a quick recap of neutrino physics. Neutrinos are sub-atomic particles of the lepton variety and are essentially neutrally charged versions of the other leptons – electrons, muons and taus – which all have a negative charge. So, we say that neutrinos come in three flavours – electron neutrinos, muon neutrinos and tau neutrinos.

Neutrinos were initially proposed by Pauli (a proposal later refined by Fermi) to explain how energy could be transported away from a system undergoing beta decay. When solar fusion began to be understood in the 1930s, the role of neutrinos became problematic: only about a third of the neutrinos predicted to be produced by fusion were actually being detected – an issue which became known as the solar neutrino problem in the 1960s.

The solar neutrino problem was only resolved in the late 1990s and early 2000s, when the idea of three oscillating neutrino flavours gained wide acceptance and was finally confirmed in 2001. Solar neutrinos in transit really do oscillate between the three flavours (electron, muon and tau) – which means that if your detector is set up to detect only one flavour, you will detect only about one third of all the neutrinos coming from the Sun.

Ten years later, the IceCube neutrino observatory is using our improved understanding of neutrinos to try to detect high-energy neutrinos of extragalactic origin. The first challenge is to distinguish atmospheric neutrinos (produced in abundance as cosmic rays strike the atmosphere) from astrophysical neutrinos.

Using what we have learnt from solving the solar neutrino problem, we can be confident that any neutrinos from distant sources have had time to oscillate – and hence should arrive at Earth in approximately equal ratios. Atmospheric neutrinos produced from close sources (also known as ‘prompt’ neutrinos) don’t have time to oscillate before being detected.

When looking for point sources of high energy astrophysical neutrinos, IceCube is most sensitive to muon neutrinos – which are detected when the neutrino weakly interacts with an ice molecule – emitting a muon. A high energy muon will then generate Cherenkov radiation – which is what IceCube actually detects. Unfortunately muon neutrinos are also the most common source of cosmic ray induced atmospheric neutrinos, but we are steadily getting better at determining what energy levels represent astrophysical rather than atmospheric neutrinos.
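
As a rough illustration of that Cherenkov condition, here is a minimal sketch: the muon only radiates once it outpaces light in the ice. The index of refraction (about 1.31 for ice) and the muon rest energy are assumed standard values, not numbers taken from the Gaisser paper.

```python
import math

# Minimal sketch of the Cherenkov condition: a charged particle radiates
# only if it travels faster than light does in the medium, i.e. beta > 1/n.
# Assumed values (not from the paper): n ~ 1.31 for ice, muon rest energy
# 105.66 MeV.

N_ICE = 1.31               # assumed optical index of refraction of ice
MUON_REST_ENERGY = 105.66  # muon rest energy, MeV

beta_min = 1.0 / N_ICE
gamma_min = 1.0 / math.sqrt(1.0 - beta_min ** 2)
energy_min = gamma_min * MUON_REST_ENERGY

print(f"muon must have beta > {beta_min:.3f}")
print(f"i.e. total energy   > {energy_min:.0f} MeV")  # roughly 160 MeV
```

Any muon born in a high-energy neutrino interaction comfortably exceeds that threshold, which is why the trails of Cherenkov light are such a reliable signature.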

So, it’s still early days with this technology – with much of the effort going in to learning how to observe, rather than just observing. But maybe one day we will be observing the cosmic neutrino background – and hence the first second of the Big Bang. One day…

So… comments? Are neutrinos the fundamentally weirdest fundamental particle out there? Could IceCube be used to test the faster-than-light neutrino hypothesis? Want to suggest an article for the next edition of Journal Club?

Recycling Pulsars – The Millisecond Matters…

An artist's impression of an accreting X-ray millisecond pulsar. The flowing material from the companion star forms a disk around the neutron star which is truncated at the edge of the pulsar magnetosphere. Credit: NASA / Goddard Space Flight Center / Dana Berry


It’s a millisecond pulsar – a rapidly rotating neutron star – and it’s about to reach the end of its mass-gathering phase. For ages the vampire of this binary system has been sucking matter from a donor star. It has been busy, spinning with incredibly short rotational periods of about 1 to 10 milliseconds and shooting off X-rays. Now, something is about to happen. It is going to lose a whole lot of energy and age very quickly.

Astrophysicist Thomas Tauris of the Argelander-Institut für Astronomie and the Max-Planck-Institut für Radioastronomie has published a paper in the February 3 issue of Science in which he combines numerical models of stellar evolution with calculations of accretion torques. In this model, millisecond pulsars are shown to dissipate approximately half of their rotational energy during the last phase of the mass-transfer process, just before the pulsar turns into a radio source. Dr. Tauris’ findings are consistent with current observations, and his conclusions also explain why radio millisecond pulsars appear age-advanced relative to their companion stars. This may even be the answer to why sub-millisecond pulsars don’t exist at all!

“Millisecond pulsars are old neutron stars that have been spun up to high rotational frequencies via accretion of mass from a binary companion star.” says Dr. Tauris. “An important issue for understanding the physics of the early spin evolution of millisecond pulsars is the impact of the expanding magnetosphere during the terminal stages of the mass-transfer process.”

By drawing mass and angular momentum from a host star in a binary system, a millisecond pulsar lives its life as a highly magnetized, old neutron star with an extreme rotational frequency. While we might assume they are common, only about 200 of these pulsars are known in the Galactic disk and in globular clusters. The first millisecond pulsar was discovered in 1982. The ones that count here have spin periods between 1.4 and 10 milliseconds, but the mystery lies in how they acquired such rapid spins, their magnetic fields and their strange apparent ages. For example, when do they switch off? What happens to the spin rate when the donor star quits donating?

“We have now, for the first time, combined detailed numerical stellar evolution models with calculations of the braking torque acting on the spinning pulsar”, says Thomas Tauris, the author of the present study. “The result is that the millisecond pulsars lose about half of their rotational energy in the so-called Roche-lobe decoupling phase. This phase is describing the termination of the mass transfer in the binary system. Hence, radio-emitting millisecond pulsars should spin slightly slower than their progenitors, X-ray emitting millisecond pulsars which are still accreting material from their donor star. This is exactly what the observational data seem to suggest. Furthermore, these new findings can help explain why some millisecond pulsars appear to have characteristic ages exceeding the age of the Universe and perhaps why no sub-millisecond radio pulsars exist.”
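
To get a feel for what losing about half of the rotational energy implies for the spin, here is a minimal sketch using the standard rigid-rotator energy formula. The moment of inertia (10^45 g cm², i.e. 10^38 kg m²) and the 1.5 millisecond starting period are assumed fiducial values, not numbers from Tauris’ models.

```python
import math

# Illustration of the Roche-lobe decoupling claim quoted above: if a pulsar
# dissipates half of its rotational energy E = 0.5 * I * omega^2, its spin
# period grows by a factor of sqrt(2).  The moment of inertia and starting
# period below are assumed fiducial values, not Tauris' model output.

I_NS = 1e38        # neutron star moment of inertia, kg m^2 (fiducial)
P_START = 1.5e-3   # spin period while still accreting, s (assumed)

def rotational_energy(period_s, inertia=I_NS):
    """Rotational kinetic energy of a rigid rotator with the given spin period."""
    omega = 2.0 * math.pi / period_s
    return 0.5 * inertia * omega ** 2

e_before = rotational_energy(P_START)
e_after = 0.5 * e_before                           # half the energy is dissipated
p_after = P_START * math.sqrt(e_before / e_after)  # period scales as 1/sqrt(E)

print(f"E_rot before decoupling : {e_before:.2e} J")
print(f"spin period afterwards  : {p_after * 1e3:.2f} ms")  # about 2.1 ms
```

So halving the energy lengthens the period only modestly, by a factor of the square root of two – consistent with radio millisecond pulsars spinning slightly slower than their still-accreting X-ray counterparts, as the observations suggest.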

Thanks to this new study, we’re now able to see how a spinning pulsar could brake out of its equilibrium spin. In this final stage, the mass-transfer rate drops, which affects the magnetospheric radius of the pulsar. The magnetosphere expands and forces the incoming matter to act as a propeller, braking the pulsar and slowing its spin.

“Actually, without a solution to the “turn-off” problem we would expect the pulsars to even slow down to spin periods of 50-100 milliseconds during the Roche-lobe decoupling phase”, concludes Thomas Tauris. “That would be in clear contradiction with observational evidence for the existence of millisecond pulsars.”

Original Story Source: Max-Planck-Institut für Radioastronomie News Release. For Further Reading: Spin-Down of Radio Millisecond Pulsars at Genesis.

Could a ‘Death Star’ Really Destroy a Planet?

The Death Star. Image Credit: Wookieepedia / Lucasfilm

Countless Sci-Fi fans vividly remember the famous scene in Star Wars in which the Death Star obliterates the planet Alderaan.

Mirroring many late night caffeine-fueled arguments among Sci-Fi fans, a University of Leicester researcher asks the question:

Could a small moon-sized battle station generate enough energy to destroy an Earth-sized planet?

A paper by David Boulderstone (University of Leicester) sets out to answer that very question. First, for the uninitiated, just what the heck is a Death Star?

According to Star Wars lore, the DS-1 Orbital Battle Station, or Death Star, is a moon-sized battle station designed to spread fear throughout the galaxy. The image above shows the Death Star as it appeared in Star Wars Episode IV: A New Hope (1977). The Death Star’s main weapon is depicted as a superlaser capable of destroying planets with a single blast.

Boulderstone claims that it is possible to estimate how much energy the Death Star would need in order to destroy a planet with its superlaser. There are a number of assumptions made, however, in order to come up with the energy requirement.

For starters, Boulderstone assumed that Alderaan did not have any sort of planetary “deflector” shield. A second assumption is that the planet is a solid body of uniform density – essentially ignoring the complex interior of planets, due to lack of information on Alderaan itself. Using the idealized sphere model based on Earth’s mass and diameter, it was possible to determine the gravitational binding energy of Alderaan using a simple equation:

U = 3GMp² / (5Rp)

Where G is the Gravitational Constant (6.673 × 10⁻¹¹), Mp is the planet’s mass, and Rp is the planet’s radius. Using Earth’s mass and radius, the required energy comes out to 2.25 × 10³² Joules. Using Jupiter’s data, the energy required goes up to 2 × 10³⁶ Joules.
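
Plugging standard values into that formula reproduces both figures; the short sketch below is just a check of the arithmetic, with Earth and Jupiter standing in for Alderaan as in the paper’s idealization.

```python
# Check of the binding-energy figures quoted above, using the uniform-density
# formula U = 3*G*M^2 / (5*R) with standard values for Earth and Jupiter.

G = 6.673e-11  # gravitational constant, N m^2 kg^-2

def binding_energy(mass_kg, radius_m):
    """Gravitational binding energy of a uniform-density sphere, in joules."""
    return 3.0 * G * mass_kg ** 2 / (5.0 * radius_m)

earth = binding_energy(5.97e24, 6.37e6)    # Earth's mass and radius
jupiter = binding_energy(1.90e27, 7.00e7)  # Jupiter's mass and radius

print(f"Earth-sized planet  : {earth:.2e} J")    # about 2.2e32 J
print(f"Jupiter-sized planet: {jupiter:.2e} J")  # about 2e36 J
```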

Boulderstone asserts that (according to Star Wars lore) the Death Star is powered by a ‘hypermatter’ reactor, possessing the energy output of several main-sequence stars. Given that the power output of our Sun is about 3 × 10²⁶ Joules per second, it’s a reasonable assumption the Death Star’s reactor could power the superlaser.

Boulderstone states that the simplified planetary model is reasonable to use, since the Death Star’s main power reactor has an energy output equal to several main-sequence stars. Even if Earth’s exact composition were used in the equation above, the required energy to destroy a planet would only change by a few orders of magnitude – well within the Death Star’s power budget.

Boulderstone reiterated that the energy required to destroy a Jupiter-sized planet would put considerable strain on the Death Star. To destroy a planet like Jupiter, all power from essential systems and life support (no re-routing from the auxiliary EPS conduits – that’s a Star Trek hack!) would be required, which is not necessarily possible.

Boulderstone’s conclusion is that the Death Star could indeed destroy Earth-like planets, given its main power source. While the Death Star could destroy an Earth-sized planet, a Jupiter-sized planet would be a tough challenge, and the Galactic Empire would need to resort to using a Suncrusher to destroy stars.

If you’d like to read Boulderstone’s paper, you can access it at: https://physics.le.ac.uk/journals/index.php/pst/article/view/328/195

Guest Post: The Cosmic Energy Inventory

The Cosmic Energy Inventory chart by Markus Pössel. Click for larger version.


Now that the old year has drawn to a close, it’s traditional to take stock. And why not think big and take stock of everything there is?

Let’s base our inventory on energy. And as Einstein taught us that energy and mass are equivalent, that means automatically taking stock of all the mass that’s in the universe, as well – including all the different forms of matter we might be interested in.

Of course, since the universe might well be infinite in size, we can’t simply add up all the energy. What we’ll do instead is look at fractions: How much of the energy in the universe is in the form of planets? How much is in the form of stars? How much is plasma, or dark matter, or dark energy?


The chart above is a fairly detailed inventory of our universe. The numbers I’ve used are from the article The Cosmic Energy Inventory by Masataka Fukugita and Jim Peebles, published in 2004 in the Astrophysical Journal (vol. 616, p. 643ff.). The chart style is borrowed from Randall Munroe’s Radiation Dose Chart over at xkcd.

These fractions will have changed a lot over time, of course. Around 13.7 billion years ago, in the Big Bang phase, there would have been no stars at all. And the number of, say, neutron stars or stellar black holes will have grown continuously as more and more massive stars have ended their lives, producing these kinds of stellar remnants. For this chart, following Fukugita and Peebles, we’ll look at the present era. What is the current distribution of energy in the universe? Unsurprisingly, the values given in that article come with different uncertainties – after all, the authors are extrapolating to a pretty grand scale! The details can be found in Fukugita & Peebles’ article; for us, their most important conclusion is that the observational data and their theoretical bases are now indeed firm enough for an approximate, but differentiated and consistent picture of the cosmic inventory to emerge.

Let’s start with what’s closest to our own home. How much of the energy (equivalently, mass) is in the form of planets? As it turns out: not a lot. Based on extrapolations from what data we have about exoplanets (that is, planets orbiting stars other than the sun), just one part-per-million (1 ppm) of all energy is in the form of planets; in scientific notation: 10⁻⁶. Let’s take “1 ppm” as the basic unit for our first chart, and represent it by a small light-green square. (Fractions of 1 ppm will be represented by partially filled such squares.) Here is the first box (of three), listing planets and other contributions of about the same order of magnitude:

So what else is in that box? Other forms of condensed matter, mainly cosmic dust, account for 2.5 ppm, according to rough extrapolations based on observations within our home galaxy, the Milky Way. Among other things, this is the raw material for future planets!

For the next contribution, a jump in scale. To the best of our knowledge, pretty much every galaxy contains a supermassive black hole (SMBH) in its central region. Masses for these SMBHs vary between a hundred thousand times the mass of our Sun and several billion solar masses. Matter falling into such a black hole (and getting caught up, intermittently, in super-hot accretion disks swirling around the SMBHs) is responsible for some of the brightest phenomena in the universe: active galaxies, including ultra high-powered quasars. The contribution of matter caught up in SMBHs to our energy inventory is rather modest, though: about 4 ppm; possibly a bit more.

Who else is playing in the same league? The sum total of all electromagnetic radiation produced by stars and by active galaxies (the two most important sources) over the course of the last billions of years: 2 ppm. Also, neutrinos produced during supernova explosions (at the end of the life of massive stars), or in the formation of white dwarfs (remnants of lower-mass stars like our Sun), or simply as part of the ordinary fusion processes that power ordinary stars: 3.2 ppm all in all.

Then, there’s binding energy: If two components are bound together, you will need to invest energy in order to separate them. That’s why binding energy is negative – it’s an energy deficit you will need to overcome to pry the system’s components apart. Nuclear binding energy, from stars fusing together light elements to form heavier ones, accounts for -6.3 ppm in the present universe – and the total gravitational binding energy accumulated as stars, galaxies, galaxy clusters, other gravitationally bound objects and the large-scale structure of the universe have formed over the past 14 or so billion years, for an even larger -13.4 ppm. All in all, the negative contributions from binding energy more than cancel out all the positive contributions by planets, radiation, neutrinos etc. we’ve listed so far.

Which brings us to the next level. In order to visualize larger contributions, we need to change scale. In box 2, one square will represent a fraction of 1/20,000 or 0.00005. Put differently: Fifty of the little squares in the first box correspond to a single square in the second box:

So here, without further ado, is box 2 (including, in the upper right corner, a scale model of the first box):

Now we are in the realm of stars and related objects. By measuring the luminosity of galaxies, and using standard relations between the masses and luminosity of stars (“mass-to-light-ratio”), you can get a first estimate for the total mass (equivalently: energy) contained in stars. You’ll also need to use the empirical relation (“initial mass function”) for how this mass is distributed, though: How many massive stars should there be? How many lower-mass stars? Since different stars have different lifetimes (live massively, die young), this gives estimates for how many stars out there are still in the prime of life (“main sequence stars”) and how many have already died, leaving white dwarfs (from low-mass stars), neutron stars (from more massive stars) or stellar black holes (from even more massive stars) behind. The mass distribution also provides you with an estimate of how much mass there is in substellar objects such as brown dwarfs – objects which never had sufficient mass to make it to stardom in the first place.

Let’s start small with the neutron stars at 0.00005 (1 square, at our current scale) and the stellar black holes (0.00007). Interestingly, those are outweighed by brown dwarfs which, individually, have much less mass, but of which there are, apparently, really a lot (0.00014; this is typical of stellar mass distribution – lots of low-mass stars, much fewer massive ones.) Next come white dwarfs as the remnants of lower-mass stars like our Sun (0.00036). And then, much more than all the remnants or substellar objects combined, ordinary, main sequence stars like our Sun and its higher-mass and (mostly) lower-mass brethren (0.00205).

Interestingly enough, in this box, stars and related objects contribute about as much mass (or energy) as more undifferentiated types of matter: molecular gas (mostly hydrogen molecules, at 0.00016), hydrogen and helium atoms (HI and HeI, 0.00062) and, most notably, the plasma that fills the void between galaxies in large clusters (0.0018) add up to a whopping 0.00258. Stars, brown dwarfs and remnants add up to 0.00267.
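
If you’d like to check the bookkeeping, the box-2 entries quoted above do add up as advertised; the snippet below simply sums the same fractions of the total cosmic energy density.

```python
# Summing the box-2 entries quoted above (fractions of the total cosmic
# energy density, as used in the chart).

stars_and_remnants = {
    "neutron stars": 0.00005,
    "stellar black holes": 0.00007,
    "brown dwarfs": 0.00014,
    "white dwarfs": 0.00036,
    "main sequence stars": 0.00205,
}

diffuse_matter = {
    "molecular gas": 0.00016,
    "HI and HeI atoms": 0.00062,
    "cluster plasma": 0.0018,
}

print(f"stars, brown dwarfs and remnants: {sum(stars_and_remnants.values()):.5f}")  # 0.00267
print(f"gas and plasma                  : {sum(diffuse_matter.values()):.5f}")      # 0.00258
```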

Further contributions with about the same order of magnitude are survivors from our universe’s most distant past: The cosmic microwave background radiation (CMB), remnant of the extremely hot radiation interacting with equally hot plasma in the big bang phase, contributes 0.00005; the lesser-known cosmic neutrino background, another remnant of that early equilibrium, contributes a remarkable 0.0013. The binding energy from the first primordial fusion events (formation of light elements within those famous “first three minutes”) gives another contribution in this range: -0.00008.

While, in the previous box, the matter we love, know and need was not dominant, it at least made a dent. This changes when we move on to box 3. In this box, one square corresponds to 0.005. In other words: 100 squares from box 2 add up to a single square in box 3:

Box 3 is the last box of our chart. Again, a scale model of box 2 is added for comparison: All that’s in box 2 corresponds to one-square-and-a-bit in box 3.

The first new contribution: warm intergalactic plasma. Its presence is deduced from the overall amount of ordinary matter (which follows from measurements of the cosmic background radiation, combined with data from surveys and measurements of the abundances of light elements) as compared with the ordinary matter that has actually been detected (as plasma, stars, e.g.). From models of large-scale structure formation, it follows that this missing matter should come in the shape (non-shape?) of a diffuse plasma, which isn’t dense (or hot) enough to allow for direct detection. This cosmic filler substance amounts to 0.04, or 85% of ordinary matter, showing just how much of a fringe phenomenon the astronomical objects we usually hear and read about really are.

The final two (dominant) contributions come as no surprise for anyone keeping up with basic cosmology: dark matter at 23% is, according to simulations, the backbone of cosmic large-scale structure, with ordinary matter no more than icing on the cake. Last but not least, there’s dark energy with its contribution of 72%, responsible both for the cosmos’ accelerated expansion and for the 2011 physics Nobel Prize.

Minority inhabitants of a part-per-million type of object made of non-standard cosmic matter – that’s us. But at the same time, we are a species that, its cosmic fringe position notwithstanding, has made remarkable strides in unravelling the big picture – including the cosmic inventory represented in this chart.

__________________________________________

Here is the full chart for you to download: the PNG version (1200×900 px, 233 kB) or the lovingly hand-crafted SVG version (29 kB).

The chart “The Cosmic Energy Inventory” is licensed under Creative Commons BY-NC-SA 3.0. In short: You’re free to use it non-commercially; you must add the proper credit line “Markus Pössel [www.haus-der-astronomie.de]”; if you adapt the work, the result must be available under this or a similar license.

Technical notes: As is common in astrophysics, Fukugita and Peebles give densities as fractions of the so-called critical density; in the usual cosmological models, that density, evaluated at any given time (in this case: the present), is critical for determining the geometry of the universe. Using very precise measurements of the cosmic background radiation, we know that the average density of the universe is indistinguishable from the critical density. For simplicity’s sake, I’m skipping this detour in the main text and quoting all of F & P’s numbers as “fractions of the universe’s total energy (density)”.

For the supermassive black hole contributions, I’ve neglected the fraction ?n in F & P’s article; that’s why I’m quoting a lower limit only. The real number could theoretically be twice the quoted value; it’s apparently more likely to be close to the value given here, though. For my gravitational binding energy, I’ve added F & P’s primeval gravitational binding energy (no. 4 in their list) and their binding energy from dissipative gravitational settling (no. 5).

The fact that the content of box 3 adds up not quite to 1, but to 0.997, is an artefact of rounding not quite consistently when going from box 2 to box 3. I wanted to keep the sum of all that’s in box 2 at the precision level of that box.

Journal Club – This new Chi b (3P) thingy

Today's Journal Club is about a new addition to the Standard Model of fundamental particles.


According to Wikipedia, a Journal Club is a group of individuals who meet regularly to critically evaluate recent articles in the scientific literature. Since this is Universe Today if we occasionally stray into critically evaluating each other’s critical evaluations, that’s OK too.

And of course, the first rule of Journal Club is… don’t talk about Journal Club. So, without further ado – today’s journal article is about a new addition to the Standard Model of fundamental particles.

The good folk at the CERN Large Hadron Collider finished off 2011 with some vague murmurings about the Higgs Boson – which might have been kind-of sort-of discovered in the data already, but due to the degree of statistical noise around it, no-one’s willing to call it really found yet.

Since there is probably a Nobel prize in it – this seems like a good decision. It is likely that a one-way-or-the-other conclusion will be possible around this time next year – either because collisions to be run over 2012 reveal some critical new data, or because someone sifting through the mountain of data already produced will finally nail it.

But in the meantime, they did find something in 2011. There is a confirmed Observation of a new chi_b state in radiative transitions to Upsilon(1S) and Upsilon(2S) at the ATLAS experiment – or, in a nutshell… we hit Bottomonium.

In the lexicon of sub-atomic particle physics, the term Quarkonium is used to describe a particle whose constituents comprise a quark and its own anti-quark. So for example you can have Charmonium (a charm quark and a charm anti-quark) and you can have Bottomonium (a bottom quark and a bottom anti-quark).

The new Chi_b (3P) particle has been reported as a boson – which is technically correct, since it has integer spin, while fermions (such as baryons and leptons) have half-integer spin. But it’s not an elementary boson like photons, gluons or the (theoretical) Higgs – it’s a composite boson made of quarks. So it is perhaps less confusing to think of it as a meson (which is a bosonic hadron). Like other mesons, Chi_b (3P) is a hadron that would not commonly be found in nature; it appears only briefly in particle accelerator collisions before it decays.

So… comments? Has the significance of this new finding been muted because the discoverers thought it would just prompt a lot of bottom jokes? Is Chi_b (3P) the ‘Claytons Higgs’ (the boson you have when you’re not having a Higgs)? Want to suggest an article for the next edition of Journal Club?

Otherwise, have a great 2012.

Today’s article:
The ATLAS collaboration, Observation of a new chi_b state in radiative transitions to Upsilon(1S) and Upsilon(2S) at the ATLAS experiment.

Slower than Light Neutrinos

The first annotated neutrino event.


Earlier this year, an international team of scientists announced they had found neutrinos — tiny particles with an equally tiny but non-zero mass — traveling faster than the speed of light. Unable to find a flaw themselves, the team put out a call for physicists worldwide to check their experiment. One physicist who answered the call was Dr. Ramanath Cowsik. He found a potentially fatal flaw in the experiment that challenged the existence of faster than light neutrinos. 

Superluminal (faster than light) neutrinos were the result of the OPERA experiment, a collaboration between the CERN physics laboratory in Geneva, Switzerland, and the Laboratori Nazionali del Gran Sasso in Gran Sasso, Italy.

Scientists at CERN managed to repeat their result of neutrinos travelling faster than the speed of light. Image credit: CERN/Science Photo Library

The experiment timed neutrinos as they traveled 730 kilometres (about 450 miles) through Earth from their origin point at CERN to a detector in Gran Sasso. The team was shocked to find that the neutrinos arrived at Gran Sasso 60 nanoseconds sooner than they would have if they were traveling at the speed of light in a vacuum. In short, they appeared to be superluminal.

This result created either a problem for physics or a breakthrough. According to Einstein’s theory of special relativity, any particle with mass can come close to the speed of light but can’t reach it. Since neutrinos have mass, superluminal neutrinos shouldn’t exist. But, somehow, they did.
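
Just how hard is it for a neutrino with mass to be superluminal? A rough, illustrative sketch shows how close to the speed of light relativity actually expects such a particle to travel; the neutrino mass of about 0.1 eV and the beam energy of order 10 GeV are order-of-magnitude assumptions, not OPERA’s exact values.

```python
# Rough illustration of the special-relativity expectation described above.
# For an ultra-relativistic particle, 1 - v/c is approximately 1/(2*gamma^2),
# with gamma = E / (m c^2).  Both input values are order-of-magnitude
# assumptions, not OPERA's exact numbers.

NEUTRINO_REST_ENERGY_EV = 0.1  # assumed neutrino mass-energy, eV
BEAM_ENERGY_EV = 10e9          # assumed beam energy, eV (order 10 GeV)

gamma = BEAM_ENERGY_EV / NEUTRINO_REST_ENERGY_EV
shortfall = 1.0 / (2.0 * gamma ** 2)  # expected deficit below light speed

print(f"Lorentz factor gamma : {gamma:.1e}")
print(f"expected 1 - v/c     : {shortfall:.1e}")  # about 5e-23
```

That expected shortfall of a few parts in 10²³ is nearly eighteen orders of magnitude smaller than the 2.5 × 10⁻⁵ excess OPERA appeared to measure – which is why the result demanded either extraordinary new physics or an experimental error.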

But Cowsik questioned the neutrinos’ genesis. The OPERA experiments generated neutrinos by slamming protons into a stationary target. This produced a pulse of pions, unstable particles that were magnetically focused into a tunnel where they decayed into neutrinos and muons (another tiny elementary particle). The muons never went further than the tunnel, but the neutrinos, which can slip through matter like a ghost passes through a wall, kept going towards Gran Sasso.

The creation of a neutrino and a muon. Image credit: J. Sonier

Cowsik and his team looked closely at this first step of the OPERA experiment. They investigated whether “pion decays would produce superluminal neutrinos, assuming energy and momentum are conserved,” he said. The OPERA neutrinos had a lot of energy but very little mass, so the question was whether they could really move faster than light.

What Cowsik and his team found was that if neutrinos produced from a pion decay were traveling faster than light, the pion lifetime would get longer and each neutrino would carry a smaller fraction of the energy it shares with the muon. Within the present framework of physics, superluminal neutrinos would be very difficult to produce. “What’s more,” Cowsik explains, “these difficulties would only increase as the pion energy increases.”

There is an experimental check of Cowsik’s theoretical conclusion. CERN’s method of producing neutrinos is duplicated naturally when cosmic rays hit Earth’s atmosphere. An observatory called IceCube is set up to observe these naturally occurring neutrinos in Antarctica; as neutrinos collide with other particles, they generate muons that leave trails of light flashes as they pass through a nearly 2.5 kilometre (1.5 mile) thick block of clear ice.

A schematic image of IceCube. ICE.WISC.EDU / PETE GUEST

IceCube has detected neutrinos with energy 10,000 times higher than any generated as part of the OPERA experiment, leading Cowsik to conclude that their parent pions must have correspondingly high energy levels. His team’s calculations based on laws of the conservation of energy and momentum revealed that the lifetimes of those pions should be too long for them to decay into superluminal neutrinos.

As Cowsik explains, IceCube’s detection of high-energy neutrinos is indicative that pions do decay according to standard ideas of physics, but the neutrinos will only approach the speed of light; they will never exceed it.

Source: Pions Don’t Want to Decay into Faster than Light Neutrinos

Faster than Light Neutrinos (maybe): Field Trip!