Dark matter is the architect of large-scale cosmic structure and the engine behind the rotation of galaxies. It’s an indispensable part of the physics of our Universe – and yet scientists still don’t know what it’s made of. The latest data from Planck suggest that the mysterious substance comprises 26.2% of the cosmos, making it more than five times as prevalent as normal, everyday matter. Now, four European researchers have hinted that they may have a discovery on their hands: a signal in x-ray light that has no known cause, and may be evidence of a long-sought interaction between particles – namely, the decay of dark matter.
When astronomers want to study an object in the night sky, such as a star or galaxy, they begin by analyzing its light across all wavelengths. This allows them to visualize narrow dark lines in the object’s spectrum, called absorption lines. Absorption lines occur because a star’s or galaxy’s component elements soak up light at certain wavelengths, preventing most photons with those energies from reaching Earth. Similarly, interacting particles can also leave emission lines in a star’s or galaxy’s spectrum: bright lines that are created when excess photons are emitted via subatomic processes such as excitation and decay. By looking closely at these emission lines, scientists can usually paint a robust picture of the physics going on elsewhere in the cosmos.
But sometimes, scientists find an emission line that is more puzzling. Earlier this year, researchers at the Laboratory of Particle Physics and Cosmology (LPPC) in Switzerland and Leiden University in the Netherlands identified an excess bump of energy in x-ray light coming from both the Andromeda galaxy and the Perseus galaxy cluster: an emission line with an energy around 3.5 keV. No known process can account for this line; however, it is consistent with models of the theoretical sterile neutrino – a particle that many scientists believe is a prime candidate for dark matter.
The researchers believe that this strange emission line could result from the decay of these dark matter particles, a process that is thought to release x-ray photons. In fact, the signal appeared to be strongest in the densest regions of Andromeda and Perseus and grew more diffuse away from the center, a distribution that is also characteristic of dark matter. Additionally, the signal was absent from the team’s observations of deep, empty space, implying that it is real and not just an instrumental artifact.
In a pre-print of their paper, the researchers are careful to stress that the signal itself is weak by scientific standards. That is, they can only be 99.994% sure that it is a true result and not just a rogue statistical fluctuation, a level of confidence that is known as 4σ. (The gold standard for a discovery in science is 5σ: a result that can be declared “true” with 99.99994% confidence.) Other scientists are not so sure that dark matter is such a good explanation after all. According to predictions based on measurements of the Lyman-alpha forest – that is, the spectral pattern of hydrogen absorption and photon emission within very distant, very old gas clouds – any particle purporting to be dark matter should have an energy above 10 keV, nearly three times the energy of this most recent signal.
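Those confidence percentages come straight from the Gaussian distribution; for readers who want to check them, here is a minimal Python sketch using the standard error function:

```python
from math import erf, sqrt

def confidence(n_sigma):
    """Two-sided probability that a normally distributed result
    within n_sigma standard deviations is not a fluctuation."""
    return erf(n_sigma / sqrt(2))

print(f"4 sigma: {confidence(4):.4%}")   # ~99.9937%
print(f"5 sigma: {confidence(5):.5%}")   # ~99.99994%
```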
As always, the study of cosmology is fraught with mysteries. Whether this particular emission line turns out to be evidence of a sterile neutrino (and thus of dark matter) or not, it does appear to be a signal of some physical process that scientists do not yet understand. If future observations can increase the certainty of this discovery to the 5σ level, astrophysicists will have yet another phenomenon to account for – an exciting prospect, regardless of the final result.
The team’s research has been accepted to Physical Review Letters and will be published in an upcoming issue.
Imagine, if you would, a potential future for humanity… Imagine massive space-elevators lifting groups of men, women, and children skyward off Earth’s surface. These passengers are then loaded onto shuttles and ferried to the Moon where interstellar starships are docked, waiting to rocket to the stars. These humans are about to begin the greatest journey humanity has ever embarked upon, as they will be the first interstellar colonists to leave our home Solar System in order to begin populating other worlds around alien stars.
There are many things we must tackle before we can make this type of science-fiction scene a reality. Obviously, much faster methods of travel are needed, as well as some sort of incredible material that can serve to anchor the aforementioned space elevators. These are all scientific and engineering questions that humanity will need to answer before undertaking such a journey into the cosmos.
But there is one particular important feature that we can begin to tackle today: where do we point these starships? Towards which system of exoplanets are we to send our brave colonists?
Of all of the amazing things we need to discover or invent to make this scene a reality, discovering which worlds to aim our ships at is something that is actually being worked on today.
It’s an exciting era in astronomy, as astronomers are currently discovering that many of the stars we view in the night sky have their own planets in orbit around them. Many of them are massive worlds, orbiting at varying distances from their parent star. It is no surprise that we are discovering the vast majority of these Jupiter-sized worlds first; larger worlds are much easier to detect than smaller ones. Imagine a bright spotlight pointing at you some 500 yards away (5 football fields). Your job is to detect something the size of a period on this page orbiting around it, something that emits no light of its own. As you can see, the task would be daunting. Nevertheless, our planet hunters have been utilizing methods that enable us to accurately find these tiny specks of gas and rock despite their large and luminous companion suns.
However, it is not the method of finding these planets that this article is about, but rather what we do to figure out which of these worlds are worthy of our limited resources and attention. We cannot very well point those starships in random directions and just hope they happen across an Earth-sized planet that has a nitrogen-oxygen-rich atmosphere and drinkable water. We need to identify which planets appear to have these characteristics before we go launching ourselves into the vast universe.
How can we do this? How is it possible that we are able to say with any level of certainty what a planet’s atmosphere is composed of when this planet is so small and so very far away? Spectroscopy is the answer, and it just might be the key to our future in the cosmos.
Just so I may illustrate how remarkable our scientific methods are in this field of research, I first need to show you the distances we are talking about. Let’s take Kepler 186f, the first planet we have discovered that is very similar to Earth. It is around 1.1 times the size of Earth and orbits within the habitable zone of its star, Kepler-186, a red dwarf cooler and dimmer than our own Sun.
Let’s do the math to show you just how distant this planet is. Kepler 186f is around 490 lightyears from Earth.
Kepler 186f = 490 lightyears away
Light moves at 186,282 miles per second.
186,282 mi/s × 60 s/1 min × 60 min/1 hr × 24 hr/1 day × 365 days/1 year = 5.87 × 10^12 mi/yr
Kepler 186f: 490 lyrs × 5.87 × 10^12 miles/1 lyr = 2.88 × 10^15 miles, or 2.9 QUADRILLION MILES from Earth.
Just to put this distance into perspective, let’s suppose we utilize the fastest spacecraft we have to get there. The Voyager 1 spacecraft is moving at around 38,500 mi/hr. If we left on that craft today and headed towards this possible future Earth, it would take us roughly 8.5 MILLION YEARS to get there. That’s around 34 times longer than the span since anatomically modern humans first appeared on Earth some 250,000 years ago. The entire history of our species, replayed 34 times over, BEFORE you would arrive at this planet. Knowing these numbers, how is it even possible that we can know what this planet’s atmosphere, and others like it, are made of?
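The arithmetic above is easy to replay in a few lines of Python. Note that the speeds and distances are the rounded figures used in this article, not precision values:

```python
# Replaying the distance arithmetic with the article's rounded figures.
LIGHT_SPEED_MI_S = 186_282                    # miles per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

miles_per_ly = LIGHT_SPEED_MI_S * SECONDS_PER_YEAR   # ~5.87e12 mi
distance_mi = 490 * miles_per_ly                     # ~2.88e15 mi

VOYAGER_MPH = 38_500
travel_years = distance_mi / VOYAGER_MPH / (24 * 365)   # hours -> years

print(f"One light-year:     {miles_per_ly:.2e} miles")
print(f"Kepler 186f:        {distance_mi:.2e} miles")
print(f"At Voyager 1 speed: {travel_years:.1e} years")   # ~8.5 million
```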
First, here’s a bit of chemistry so you can understand the field of spectroscopy, and then how we apply it to the astronomical sciences. Different elements are composed of differing numbers of protons, neutrons, and electrons. These varying numbers are what set the elements apart from one another on the periodic table. It is the electrons, however, that are of particular interest in much of chemistry. Each element’s electron configuration allows for what we call a spectral signature. Because every single element has a specific electron configuration, the light it both absorbs and emits acts as a sort of photon fingerprint: a unique identifier for that element.
The standard equation for determining the characteristics of light is:
c = νλ
c is the speed of light in a vacuum (3.00 × 10^8 m/s)
ν (nu) is the frequency of the light wave (in hertz)
λ (lambda) is the wavelength (in meters, usually converted to nanometers), which determines what color of light is emitted by the element(s), or more generally where the light falls on the electromagnetic spectrum (infrared, visible, ultraviolet, etc.)
If you have either the frequency or the wavelength, you can determine the rest. You can even start with the energy of the light being detected by your instruments and then work backwards with the following equations:
The energy of a photon can be described mathematically as:
E_photon = hν
OR
E_photon = hc / λ
What these mean is that the energy of a photon is the product of the frequency (ν) of the light wave and Planck’s constant (h), which is 6.63 × 10^-34 joule-seconds. In the case of the second equation, the energy of the photon is equal to Planck’s constant times the speed of light divided by the wavelength. This gives you the amount of energy that a specific wavelength of light carries. These equations are known as the Planck-Einstein relation. So, if your instruments give you a specific energy reading of the light coming from a distant star, you can deduce the information you need about that light and determine which element(s) are either emitting or absorbing those wavelengths. It’s all mathematical detective work.
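To see the Planck-Einstein relation in action, here is a small Python sketch that converts between wavelength and photon energy, using hydrogen’s familiar red emission line as an example:

```python
# Planck-Einstein relation: E = h * nu = h * c / lambda
H = 6.626e-34   # Planck's constant, joule-seconds
C = 2.998e8     # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy (in joules) of a photon with the given wavelength."""
    return H * C / wavelength_m

def photon_wavelength(energy_j):
    """Wavelength (in meters) of a photon with the given energy."""
    return H * C / energy_j

# Hydrogen's red emission line sits at about 656.3 nm:
e = photon_energy(656.3e-9)
print(f"{e:.2e} J")                            # ~3.03e-19 J
print(f"{photon_wavelength(e) * 1e9:.1f} nm")  # recovers 656.3 nm
```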
So, the electrons that orbit the nucleus of an atom exist in what we call orbitals. Depending on the atom (and the electrons associated with it), there are many different orbitals. There is the “ground” state for an electron, which means the electron is as close to the nucleus as it can be; it is “non-excited”. However, there are higher quantum orbitals that the electron(s) can “jump” to when the atom is excited. Each orbital can have different quantum number values associated with it. The main value we will use is the principal quantum number. This is denoted by the letter “n”, and takes integer values 1, 2, 3, etc. The higher the number, the further from the nucleus the electron resides, and the more energy is associated with it. This is best described with an example:
A hydrogen atom has 1 electron. That electron is whipping around its 1-proton nucleus in its ground state orbital. Suddenly, a burst of high-energy light hits the hydrogen. This energy is transferred throughout the hydrogen atom, and the electron reacts: it will instantaneously “vanish” from the n = 1 orbital and reappear in a higher quantum orbital (say n = 4). This means that as that light wave passed over the hydrogen atom, a specific wavelength was absorbed by the hydrogen (an important feature to remember for later).
Eventually, the “excited” electron will drop from its higher quantum orbital (n = 4) back down to the n = 1 orbital. When this happens, the hydrogen atom emits a photon of a specific energy or wavelength (dependent upon many factors, including the state the electron was in prior to its “excitement”, the number of levels the electron dropped, etc.). We can then measure this energy (or wavelength, or frequency) to determine which element the photon came from (in this case, hydrogen). This is how each element gets its own light signature: each atom can absorb and emit specific wavelengths of light, and they are all tied together by the equations listed above.
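For hydrogen specifically, the wavelengths produced by these electron drops can be computed with the Rydberg formula, which follows from the energy levels described above. A short Python sketch:

```python
# Rydberg formula for hydrogen: 1/lambda = R_H * (1/n_low^2 - 1/n_high^2)
R_H = 1.097e7   # Rydberg constant for hydrogen, m^-1

def emission_wavelength_nm(n_low, n_high):
    """Wavelength of the photon emitted when an electron drops
    from n_high down to n_low."""
    inv_lambda = R_H * (1 / n_low**2 - 1 / n_high**2)
    return 1e9 / inv_lambda    # meters -> nanometers

# The n = 4 -> n = 1 drop from the example above (ultraviolet):
print(f"{emission_wavelength_nm(1, 4):.1f} nm")   # ~97.2 nm

# The Balmer series (drops to n = 2) gives hydrogen's visible lines:
for n in (3, 4, 5):
    print(f"n = {n} -> 2: {emission_wavelength_nm(2, n):.1f} nm")
```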
So how does this all work? Well, in reality, there are many factors that go into this sort of astronomical study. I am simply describing the basic principle behind the work. I say this so that the many scientists that are doing this sort of work do not feel as though I have discredited their research and hard work; I promise you, it is painstakingly difficult and tedious and involves many more details that I am not mentioning here. That being said, the basic concept works like this:
We find a star that gives off the telltale signs that it has a planet orbiting around it. We do this with a few methods, but it all first started with detecting a “wobble” in the star’s motion. This “wobble” is caused by a planet orbiting its parent star. You see, when a planet orbits a star (and when anything orbits anything else), the planet isn’t really orbiting the star: the planet AND the star are orbiting a common center of mass, called the barycenter. Usually with this type of orbital system, that point is fairly close to the center of the star, so it’s safe to say that the planet orbits the star. Still, this causes the star itself to move ever so slightly, and we can measure that motion.
Once we determine that there are planets orbiting the star in question, we can study it more closely. We turn our instruments towards it, begin taking highly detailed measurements, and then we wait. What we are waiting for is a dimming of the star at a regular interval. What we are hoping for is for this newly-found exoplanet to transit our selected star. When a planet transits a star, it moves in front of the star relative to us (this also means we are incredibly lucky, as not all planets orbit “in front” of their star relative to our view). This causes the star’s brightness to dip ever so slightly at a regular interval. Now we have identified a prime exoplanet candidate for study.
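The size of that brightness dip is also what makes large planets so much easier to find: to first order, the fractional dimming is just the ratio of the planet’s disk area to the star’s. A rough Python sketch:

```python
# First-order transit depth: fractional dimming = (R_planet / R_star)^2.
def transit_depth(r_planet_km, r_star_km):
    return (r_planet_km / r_star_km) ** 2

# Approximate radii in km:
R_SUN, R_EARTH, R_JUPITER = 696_000, 6_371, 69_911

print(f"Earth transiting the Sun:   {transit_depth(R_EARTH, R_SUN):.4%}")
print(f"Jupiter transiting the Sun: {transit_depth(R_JUPITER, R_SUN):.2%}")
```

An Earth-sized transit dims a Sun-like star by less than a hundredth of a percent, while a Jupiter-sized one produces a dip around one percent, roughly a hundred times deeper.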
We can now introduce spectroscopic principles to this hunt. We can take all sorts of measurements of the light coming from this star: its brightness, the energy it’s kicking out per second, and even what the star is made of (the emission spectrum I discussed earlier). Then we wait for the planet to transit the star, and begin taking readings. What we are doing is reading the light passing THROUGH the exoplanet’s atmosphere, and then studying what we call an absorption spectrum. As I mentioned earlier, specific elements absorb specific wavelengths of light. What we get back is a spectral reading of the star’s light signature (the emission spectrum of the star), but with missing wavelengths that show up as very tiny black lines where there used to be color. These are called Fraunhofer lines, after Joseph Fraunhofer, often called the “father” of astrophysics, who catalogued these lines in the early 19th century.
What we now have in our possession is a chemical fingerprint of this exoplanet’s atmosphere. The star’s spectrum is splayed out before us, and the barcode of the planet’s atmospheric composition lies within the light. We can then take the missing wavelengths and compare them to the established absorption/emission spectra of all the known elements. In this way, we can begin to piece together what this planet has to offer us. If we get high readings of sulfur and hydrogen, we have probably just discovered a gas giant. However, if we discover a good amount of nitrogen and oxygen, we may have found a world with liquid water on its surface (provided the planet resides within its host star’s “habitable” zone: a range of distances from the star that allows for liquid water). If we find a planet that has carbon dioxide in its atmosphere, we may just have discovered alien life (CO2 is a waste product of both cellular respiration and a lot of industrial processes, but it can also be a product of volcanism and other non-organic phenomena).
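That comparison step can be thought of as matching observed gaps against a lookup table of known lines. The Python sketch below is purely illustrative: the reference list holds just a handful of well-known visible-light lines, not a real spectral database, and real pipelines must also correct for Doppler shifts, blended lines, and instrument response:

```python
# Illustrative only: a tiny, hypothetical lookup of well-known
# visible-light lines, in nanometers.
REFERENCE_LINES_NM = {
    "hydrogen": [656.3, 486.1, 434.0],
    "sodium":   [589.0, 589.6],
    "oxygen":   [630.0, 636.4],
}

def identify(missing_lines_nm, tolerance_nm=0.5):
    """Name every element with a known line near an observed gap."""
    found = set()
    for element, lines in REFERENCE_LINES_NM.items():
        for ref in lines:
            if any(abs(obs - ref) <= tolerance_nm for obs in missing_lines_nm):
                found.add(element)
    return sorted(found)

# Gaps at 656.2, 486.3 and 589.1 nm implicate hydrogen and sodium:
print(identify([656.2, 486.3, 589.1]))   # ['hydrogen', 'sodium']
```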
What this all means is that by being able to read the light from any given object, we can narrow our search for the next Earth. Regardless of distance, if we can obtain an accurate measurement of the light moving through an exoplanet’s atmosphere, we can tell what it is made of.
We have discovered some 2,000 exoplanets thus far, and that number will only increase in the coming decades. With so many candidates, it would be a wonder if we did not find a planet that humans could live on without the help of technology. Our techniques will be further refined, and as new technologies, methods, and instruments become available, our ability to pinpoint planets that we could someday colonize will only improve.
With telescopes like the James Webb Space Telescope launching soon, we will be able to image these exoplanets and get even better spectroscopic readings from them. This type of science is on the leading edge of humanity’s journey into the cosmos. The astrophysicists and astrochemists who work in this field are the necessary precursors to the brave men and women who will one day board those interstellar spacecraft and launch our civilization into the Universe to truly become an interstellar species.
Atomic theory – that is, the idea that all matter is composed of tiny, indivisible units – has very deep roots. Initially, the theory appeared thousands of years ago in Greek and Indian texts as a philosophical idea. However, it was not embraced scientifically until the 19th century, when an evidence-based approach began to reveal what the atomic model looked like.
It was at this time that John Dalton, an English chemist, meteorologist and physicist, began a series of experiments which would culminate in him proposing the theory of atomic compositions – which thereafter would be known as Dalton’s Atomic Theory – that would become one of the cornerstones of modern physics and chemistry.
The highly anticipated film “Interstellar” is based on science and theory: from wormholes, to the push-pull of gravity on a planet, to the way a black hole might re-adjust your concept of time. But just how much of the movie is really true to what we know about the Universe? There has also been some discussion about whether the physics used for the movie’s visual effects was good enough to produce actual science. And how much of it is just creative license?
Today (Wednesday, November 26) at 19:00 UTC (2:00 p.m. EST, 11:00 a.m. PST), the Kavli Foundation hosts a live discussion with three astrophysicists who will answer viewers’ questions about black holes, relativity and gravity, to separate the movie’s science facts from its science fiction.
According to the Kavli Twitter feed, the Hangout will even help you understand what in the world happened at the end of the movie!
Scientists Mandeep Gill, Eric Miller and Hardip Sanghera will answer your questions in the live Google Hangout.
Submit questions ahead of and during the webcast by emailing [email protected] or by using the hashtag #KavliSciBlog on Twitter or Google+.
A black hole is an extraordinarily massive, improbably dense knot of spacetime that makes a living swallowing or slinging away any morsel of energy that strays too close to its dark, twisted core. Anyone fortunate (or unfortunate) enough to directly observe one of these beasts in the wild would immediately notice the way its colossal gravitational field warps all of the light from the stars and galaxies behind it, a phenomenon known as gravitational lensing.
Thanks to the power of supercomputers, a curious observer no longer has to venture into outer space to see such a sight. A team of astronomers has released their first simulated images of the lensing effects of not just one, but two black holes, trapped in orbit by each other’s gravity and ultimately doomed to merge as one.
Astronomers have been able to model the gravitational effects of a single black hole since the 1970s, but the imposing mathematics of general relativity made doing so for a double black-hole system a much larger challenge. Over the last ten years, however, scientists have improved the accuracy of computer models that handle these calculations in an effort to match observations from gravitational wave detectors like LIGO and Virgo.
The research collaboration Simulating Extreme Spacetimes (SXS) has begun using these models to mimic the lensing effects of high-gravity systems involving objects such as neutron stars and black holes. In their most recent paper, the team imagines a camera pointing at a binary black hole system against a backdrop of the stars and dust of the Milky Way. One way to figure out what the camera would see in this situation would be to use general relativity to compute the path of each photon traveling from every light source at all points within the frame. This method, however, involves a nearly impossible number of calculations. So instead, the researchers worked backwards, mapping only those photons that would reach the camera and result in a bright spot on the final image – that is, photons that would not be swallowed by either of the black holes.
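The backward-mapping idea can be caricatured in a few lines. The Python sketch below is nothing like the team’s full general-relativistic code: it only uses the textbook weak-field deflection angle α = 4GM/(c²b) and the Schwarzschild photon-capture radius to decide whether a single ray escapes or is swallowed:

```python
# Toy version of backward ray mapping: decide, per ray, whether it
# reaches the background or is captured. Constants in SI units.
G = 6.674e-11      # gravitational constant
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def deflection_angle(mass_kg, impact_param_m):
    """Weak-field light deflection, alpha = 4GM / (c^2 b), in radians."""
    return 4 * G * mass_kg / (C**2 * impact_param_m)

def ray_captured(mass_kg, impact_param_m):
    """A photon passing inside b_crit = 3*sqrt(3)*GM/c^2 spirals in."""
    b_crit = 3 * 3**0.5 * G * mass_kg / C**2
    return impact_param_m < b_crit

# Sanity check: starlight grazing the Sun bends by ~1.75 arcseconds,
# the famous value measured during the 1919 eclipse.
print(f"{deflection_angle(M_SUN, 6.96e8) * 206265:.2f} arcsec")

m = 10 * M_SUN                  # a 10-solar-mass black hole
print(ray_captured(m, 1e4))     # 10 km impact parameter: swallowed
print(ray_captured(m, 1e6))     # 1000 km: escapes, merely bent
```

A full simulation repeats this kind of test for every camera pixel, but integrates the exact null geodesic through the dynamical spacetime of both holes instead of using a single-mass approximation.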
As you can see in the image above, the team’s simulations testify to the enormous effect that these black holes have on the fabric of spacetime. Ambient photons curl into a ring around the converging binaries, forming what is known as an Einstein ring. Background objects appear to multiply on opposite sides of the merger (for instance, the yellow and blue pair of stars in the “northeast” and “southwest” areas of the ring). Light from behind the camera is even pulled into the frame by the black holes’ mammoth combined gravitational field. And each black hole distorts the appearance of the other, pinching off curved, comma-shaped regions of shadow called “eyebrows.” If you could zoom in with unlimited precision, you would find that there are, in fact, an infinite number of these eyebrows, each smaller than the last, like a cosmic set of Russian dolls.
In case you thought things couldn’t get any more amazing, SXS has also created two videos of the black hole merger: one simulated from above, and the other edge-on.
The SXS collaboration will continue to investigate gravitationally ponderous objects like black holes and neutron stars in an effort to better understand their astronomical and physical properties. Their work will also assist observational scientists as they search the skies for evidence of gravitational waves.
Check out the team’s arXiv paper describing this work and their website for even more fascinating images.
At the Large Hadron Collider (LHC) in Europe, faster is better. Faster means more powerful particle collisions and a deeper look into the makeup of matter. However, other researchers are proclaiming: not so fast. The LHC may not have discovered the Higgs Boson, the boson that imparts mass to elementary particles – the “god particle,” as some have called it. While the Higgs Boson discovery in 2012 culminated in the December 2013 awarding of the Nobel Prize to Peter Higgs and François Englert, a team of researchers has raised doubts about the Higgs Boson in a paper published in the journal Physical Review D.
The discourse is similar to what unfolded in the last year with the detection of light from the beginning of time that signified the Inflation epoch of the Universe. Researchers looking into the depths of the Universe and the inner depths of subatomic particles are searching for signals at the edge of detectability, just above the noise level and close to signals from other sources. For the BICEP2 telescope observations (see previous U.T. articles), it’s pretty much back to the drawing board, while the doubts about the Higgs Boson (see previous U.T. articles) are a definite challenge that still needs more solid evidence. In human affairs, if the Higgs Boson was not detected by the LHC, what does one do with an already-awarded Nobel Prize?
The present challenge to the Higgs Boson is not new, and it is not just a problem of detectability and sensor acuity, as is the case with the BICEP2 data. The Planck space telescope revealed that light radiated from dust combined with the magnetic field in our Milky Way galaxy could explain the signal detected by BICEP2 that researchers proclaimed as the primordial signature of the Inflation period. The Higgs Boson particle, by contrast, is a prediction of the theory proposed by Peter Higgs and several others beginning in the early 1960s – a predicted particle from the gauge theory developed by Higgs, Englert and others, at the heart of the Standard Model.
This recent paper is from a team of researchers from Denmark, Belgium and the United Kingdom led by Dr. Mads Toudal Frandsen. Their study, entitled “Technicolor Higgs boson in the light of LHC data,” discusses how the theory they support predicts Technicolor quarks across a range of energies detectable at the LHC, and that one in particular lies within the uncertainty of the data point declared to be the Higgs Boson. There are variants of Technicolor theory (TC), and the paper compares in detail the field theory behind the Standard Model Higgs and the TC Higgs (their version of the Higgs boson). Their conclusion is that Technicolor theory predicts a TC Higgs that is consistent with expected physical properties, has low mass, and sits at an energy level – 125 GeV – indistinguishable from the resonance now considered to be the Standard Model Higgs. Theirs is a composite particle, and it does not impart mass upon everything.
So you say – hold on! What is Technicolor in the jargon of particle physics? To answer this you would want to talk to a onetime plumber from the South Bronx, New York – Dr. Leonard Susskind. Though long since out of the plumbing business, Susskind first proposed Technicolor to describe the breaking of symmetry in the gauge theories that are part of the Standard Model. Susskind and other physicists of the 1970s considered it unsatisfactory that many arbitrary parameters were needed to complete the gauge theory used in the Standard Model (involving the Higgs scalar and Higgs field). These parameters defined the mass of elementary particles and other properties, but they were assigned rather than calculated, and that was not acceptable to Susskind, ’t Hooft, Veltman and others. The concept of Technicolor provided a “natural” means of describing the breakdown of symmetry in the gauge theories that make up the Standard Model.
Technicolor in particle physics shares one simple thing with the Technicolor that dominated the early color film industry: both are composite processes – combining colors in one case, particles in the other.
If the theory surrounding Technicolor is correct, then there should be many techni-quark and techni-Higgs particles to be found with the LHC or a more powerful next-generation accelerator – a veritable zoo of particles beyond just the Higgs Boson. The theory also means that these “elementary” particles are composites of smaller particles and that another force of nature would be needed to bind them. And this new paper by Belyaev, Brown, Foadi and Frandsen claims that one specific techni-quark particle has a resonance (detection point) within the uncertainty of the measurements of the Higgs Boson. In other words, the Higgs Boson might not be “the god particle” but rather a Technicolor quark particle comprised of smaller, more fundamental particles and another force binding them.
This paper by Belyaev, Brown, Foadi and Frandsen is a clear reminder that the Standard Model is unsettled and that even the discovery of the Higgs Boson is not 100% certain. In the last year, more sensitive detectors have been integrated into CERN’s LHC, which may help refute this challenge to Higgs theory – the Higgs scalar and field, and the Higgs Boson – or may reveal the signatures of Technicolor particles. Better detectors may resolve the difference between the energy level of the Technicolor quark and the Higgs Boson. LHC researchers were quick to state that their work moves on beyond the discovery of the Higgs Boson; that same work could actually disprove that they found the Higgs Boson.
When Universe Today contacted co-investigator Dr. Alexander Belyaev, the question was raised: will the recent upgrades to the CERN accelerator provide the precision needed to differentiate a techni-quark from the Higgs particle?
“There is no guarantee of course” Dr. Belyaev responded to Universe Today, “but upgrade of LHC will definitely provide much better potential to discover other particles associated with theory of Technicolor, such as heavy Techni-mesons or Techni-baryons.”
Resolving the doubts and choosing the right additions to the Standard Model will depend on better detectors, more observations and collisions at higher energies. Presently, the LHC is shut down in order to increase collision energies from 8 TeV to 13 TeV. Among the observations at the LHC, supersymmetry has not fared well, and the observations – including the Higgs Boson discovery – have supported the Standard Model. The weakness of the Standard Model of particle physics is that it does not explain the gravitational force of nature, whereas supersymmetry can. The theory of Technicolor maintains strong supporters, as this latest paper shows, and it leaves some doubt that the Higgs Boson was actually detected. Ultimately, another more powerful next-generation particle accelerator may be needed.
For Higgs and Englert, a reversal of the discovery would by no means be the ruination of a life’s work, nor would it mean the dismissal of a Nobel Prize. The theoretical work of the two physicists has long been recognized by previous awards. The Standard Model, as at least a partial solution of the theory of everything, is like a jigsaw puzzle. Piece by piece is how it is being developed, but not without missteps. Furthermore, the pieces added to the Standard Model can be like a house of cards: a larger solution may need to be replaced with a wholly different one. This could be the case with Higgs and Technicolor.
At times, like somewhat determined children, physicists thrust a solution into the unfolding puzzle that seems to fit but ultimately has to be retracted. The present discourse does not yet warrant a retraction. Elegance and simplicity are the ultimate characteristics sought in theoretical solutions. Particle physicists also use the term “naturalness” when describing the concerns with gauge theory parameters. The solutions – the pieces – of the puzzle created by Peter Higgs and François Englert have spearheaded and encouraged further work that will achieve a sounder Standard Model, but few if any claim that it will emerge as the theory of everything.
With its first runs of colliding protons in 2008-2013, the Large Hadron Collider has been providing a stream of experimental data that scientists rely on to test predictions arising out of particle and high-energy physics. In fact, today CERN made public the first data produced by LHC experiments. And with each passing day, new information is released that is helping to shed light on some of the deeper mysteries of the universe.
This week, for example, CERN announced the discovery of two new subatomic particles that are part of the baryon family. The particles, known as the Xi_b’– and Xi_b*–, were discovered thanks to the efforts of the LHCb experiment – an international collaboration involving roughly 750 scientists from around the world.
The existence of these particles was predicted by the quark model, but had never been seen before. What’s more, their discovery could help scientists to further confirm the Standard Model of particle physics, which is considered virtually unassailable now thanks to the discovery of the Higgs Boson.
Like the well-known protons that the LHC accelerates, the new particles are baryons made from three quarks bound together by the strong force. The types of quarks are different, though: the new Xi_b particles both contain one beauty (b), one strange (s), and one down (d) quark. Thanks to the heavyweight b quarks, they are more than six times as massive as the proton.
However, their mass also depends on how they are configured. Each of the quarks has an attribute called “spin”; and in the Xi_b’– state, the spins of the two lighter quarks point in the opposite direction to the b quark, whereas in the Xi_b*– state they are aligned. This difference makes the Xi_b*– a little heavier.
“Nature was kind and gave us two particles for the price of one,” said Matthew Charles of the CNRS’s LPNHE laboratory at Paris VI University. “The Xi_b’– is very close in mass to the sum of its decay products: if it had been just a little lighter, we wouldn’t have seen it at all using the decay signature that we were looking for.”
“This is a very exciting result,” said Steven Blusk from Syracuse University in New York. “Thanks to LHCb’s excellent hadron identification, which is unique among the LHC experiments, we were able to separate a very clean and strong signal from the background. It demonstrates once again the sensitivity and precision of the LHCb detector.”
Blusk and Charles jointly analyzed the data that led to this discovery. The existence of the two new baryons had been predicted in 2009 by Canadian particle physicists Randy Lewis of York University and Richard Woloshyn of TRIUMF, Canada’s national particle physics lab in Vancouver.
As well as the masses of these particles, the research team studied their relative production rates, their widths – a measure of how unstable they are – and other details of their decays. The results match up with predictions based on the theory of Quantum Chromodynamics (QCD).
QCD is part of the Standard Model of particle physics, the theory that describes the fundamental particles of matter, how they interact, and the forces between them. Testing QCD at high precision is a key to refining our understanding of quark dynamics, models of which are tremendously difficult to calculate.
“If we want to find new physics beyond the Standard Model, we need first to have a sharp picture,” said LHCb’s physics coordinator Patrick Koppenburg from Nikhef Institute in Amsterdam. “Such high precision studies will help us to differentiate between Standard Model effects and anything new or unexpected in the future.”
The measurements were made with the data taken at the LHC during 2011-2012. The LHC is currently being prepared – after its first long shutdown – to operate at higher energies and with more intense beams. It is scheduled to restart by spring 2015.
The research was published online yesterday on the physics preprint server arXiv and has been submitted to the scientific journal Physical Review Letters.
All the physical properties of our Universe – indeed, the fact that we even exist within a Universe that we can contemplate and explore – owe to events that occurred very early in its history. Cosmologists believe that our Universe looks the way it does thanks to a rapid period of inflation immediately following the Big Bang that smoothed fluctuations in the vacuum energy of space and flattened out the fabric of the cosmos itself.
According to current theories, however, interactions between the famed Higgs boson and the inflationary field should have caused the nascent Universe to collapse. Clearly, this didn’t happen. So what is going on? Scientists have worked out a new theory: It was gravity that (literally) held it all together.
The interaction between the curvature of spacetime (more commonly known as gravity) and the Higgs field has never been well understood. Resolving the apparent problem of our Universe’s stubborn existence, however, provides a good excuse to do some investigating. In a paper published this week in Physical Review Letters, researchers from the University of Copenhagen, the University of Helsinki, and Imperial College London show that even a small interaction between gravity and the Higgs would have been sufficient to stave off a collapse of the early cosmos.
The researchers modified the Higgs equations to include the effect of gravity generated by UV-scale energies. These corrections were found to stabilize the inflationary vacuum at all but a narrow range of energies, allowing expansion to continue and the Universe as we know it to exist… without the need for new physics beyond the Standard Model.
This new theory is based on the controversial evidence of inflation announced by BICEP2 earlier this summer, so its true applicability will depend on whether or not those results turn out to be real. Until then, the researchers are hoping to support their work with additional observational studies that seek out gravitational waves and more deeply examine the cosmic microwave background.
At this juncture, the Higgs-gravity interaction is not a testable hypothesis because the graviton (the hypothetical particle thought to mediate gravity’s interactions) has yet to be detected. Based purely on the mathematics, however, the new theory presents an elegant and efficient solution to the potential conundrum of why we exist at all.
Tune in to the song of Comet Churyumov-Gerasimenko
Scientists can’t figure out exactly why yet, but Comet 67P/Churyumov-Gerasimenko has been singing since at least August. Listen to the video – what do you think? I hear a patter that sounds like frogs, purring and ping-pong balls. The song is being sung at a frequency of 40-50 millihertz, far lower than the 20 hertz – 20 kilohertz range of human hearing. Rosetta’s magnetometer experiment first clearly picked up the sounds in August, when the spacecraft drew to within 62 miles (100 km) of the comet. To make them audible, Rosetta scientists increased their pitch 10,000 times.
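A rough illustration of that pitch shift: playing a recording back N times faster multiplies every frequency in it by N, so a 10,000-fold speedup lifts the 40–50 millihertz oscillations into comfortably audible territory. The numbers below come straight from the figures above; the code itself is just a back-of-the-envelope sketch.

```python
# Playing a recording back N times faster multiplies every
# frequency in it by N. Rosetta's team used N = 10,000.
speedup = 10_000

comet_band_mhz = (40, 50)  # the comet's oscillation band, in millihertz

# Convert millihertz to hertz, then apply the playback speedup.
audible_band_hz = tuple(f * speedup / 1000 for f in comet_band_mhz)
print(audible_band_hz)  # (400.0, 500.0) -- well inside the 20 Hz - 20 kHz hearing range
```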
The sounds are thought to be oscillations in the magnetic field around the comet. They were picked up by the Rosetta Plasma Consortium, a suite of five instruments on the spacecraft devoted to observing interactions between the solar plasma and the comet’s tenuous coma as well as the physical properties of the nucleus. A far cry from the stuff you donate at the local plasma center, plasma in physics is an ionized gas. Ionized means the atoms in the gas have lost or gained an electron through heating or collisions to become positively or negatively charged ions. Common forms of plasma include the electric glow of neon signs, lightning and of course the Sun itself.
Once the gas particles have lost their neutrality, electric and magnetic fields can affect their motion within the plasma. Likewise, the moving charged particles affect the very magnetic fields controlling them.
Scientists think that neutral gas particles from vaporizing ice shot into the coma become ionized under the action of ultraviolet light from the Sun. While the exact mechanism that creates the curious oscillations is still unknown, it might have something to do with the electrified atoms or ions interacting with the magnetic fields bundled with the Sun’s everyday outpouring of plasma called the solar wind. It’s long been known that a comet’s electrified or ionized gases present an obstacle to the solar wind, causing it to drape around the nucleus and shape the streamlined blue-tinted ion or gas tail.
“This is exciting because it is completely new to us. We did not expect this, and we are still working to understand the physics of what is happening,” said Karl-Heinz Glassmeier, head of Space Physics and Space Sensorics at the Technical University of Braunschweig, Germany.
While 67P C-G’s song probably won’t make the Top 40, we might listen to it just as we would any other piece of music to learn what message is being communicated.
Quick: how do you aim an instrument at the Sun from a moving rocket on a fifteen minute suborbital flight?
The answer is very carefully, and NASA plans to do just that today, Thursday, November 6th as the Rapid Acquisition Imaging Spectrograph Experiment, also known as RAISE, takes to the skies over White Sands, New Mexico, to briefly study the Sun.
Capturing five images per second, RAISE is expected to gather over 1,500 images during five minutes of data collection near apogee.
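A quick sanity check of those figures, assuming a steady five-frames-per-second cadence sustained over the full five-minute window:

```python
frames_per_second = 5
observation_seconds = 5 * 60  # five minutes of data collection near apogee

total_frames = frames_per_second * observation_seconds
print(total_frames)  # 1500 -- consistent with the roughly 1,500 images quoted
```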
Why use sub-orbital sounding rockets to observe the Sun? Don’t we already have an armada of space and ground-based instruments that stare at our nearest star around the clock? Well, it turns out that sounding rockets are still a cost-effective means of testing and demonstrating new technologies.
“Even on a five-minute flight, there are niche areas of science we can focus on well,” said solar scientist Don Hassler of the Southwest Research Institute in Boulder, Colorado in a recent press release. “There are areas of the Sun that need to be examined with the high-cadence observations that we can provide.”
Indeed, there’s a long history of studying the Sun by use of high-altitude sounding rockets, starting with the detection of solar X-rays by a detector placed in a captured V-2 rocket launched from White Sands in 1949.
RAISE will actually scrutinize an active region of the Sun turned Earthward during its brief flight to create what’s known as a spectrogram: an analysis of solar activity across differing wavelengths. This gives scientists a three-dimensional, layered snapshot of solar activity, as different wavelengths correspond to varying velocities and temperatures of solar material. Think of looking at the layers of a cake. This, in turn, paints a picture of how material is circulated and moved around the surface of the Sun.
This will be RAISE’s second flight, and this week’s launch will sport a brand new diffraction grating coated with boron carbide to enhance wavelength analysis. RAISE will also look at the Sun in the extreme ultraviolet, which cannot penetrate the Earth’s lower atmosphere. Technology pioneered by missions such as RAISE may also make its way into space permanently on future missions, such as the planned joint European Space Agency and NASA Solar Orbiter mission, set for launch in 2017. Solar Orbiter will study the Sun up close and personal, journeying to within 26 million miles (43 million kilometres) of its surface, well inside the perihelion of the planet Mercury.
“This is the second time we have flown a RAISE payload, and we keep improving it along the way,” Hassler continued. “This is a technology that is maturing relatively quickly.”
As you can imagine, RAISE relies on clear weather for a window to launch. RAISE was scrubbed for launch on November 3rd, and the current window for launch is set for 2:07 PM EST/19:07 Universal Time, which is 12:07 PM MST local time at White Sands. Unlike the suborbital launches from Wallops Island, the White Sands launches aren’t generally carried live, though they tend to shut down US highway 70 between Las Cruces and Alamogordo that bisects White Sands just prior to launch.
Currently, the largest sunspot turned forward towards the Earth is active region 2205.
Another recent mission lofted by a sounding rocket to observe the Sun, dubbed Hi-C, was highly successful during its short flight in 2013.
RAISE will fly on a Black Brant sounding rocket, which typically reaches an apogee of 180 miles or 300 kilometres.
Unfortunately, the massive sunspot region AR2192 is currently turned away from the Earth and will effectively be out of RAISE’s view. The largest in over a decade, the Jupiter-sized sunspot wowed viewers of the final solar eclipse of 2014 just last month. This large sunspot group will most likely survive its solar farside journey and reappear around the limb of the Sun sometime after November 9th – good news if RAISE is indeed scrubbed today due to weather.
And our current solar cycle has been a very schizophrenic one indeed. After a sputtering start, solar cycle #24 has been anemic at best, with the Sun struggling to come out of a profound minimum, the likes of which hasn’t been seen in over a century. And although October 2014 produced a Jupiter-sized sunspot that was easily seen with eclipse glasses, you wouldn’t know that we’ve passed a solar maximum from looking at the Sun now. In fact, there’s been talk among solar astronomers that solar cycle #25 may be even weaker, or absent altogether.
All this makes for fascinating times to study our sometimes strange star. RAISE observations will also be coordinated with views from the Solar Dynamics Observatory and the joint JAXA-NASA Hinode satellite in Earth orbit. We’ll also be at White Sands today, hoping to get a glimpse of RAISE as it briefly touches space.