Europa Life: Could ‘Extreme Shrimp’ Point To Microbes On That Moon?

This is a type of shrimp that lives in hydrothermal vents (areas of hot water) in the Caribbean. NASA is studying Rimicaris hybisae and other "extreme shrimp" to learn more about lifeforms that could survive on other worlds. Credit: Chris German, WHOI/NSF, NASA/ROV Jason © 2012 Woods Hole Oceanographic Institution

For all of the talk about aliens in science fiction, the reality is that, in our Solar System, any extraterrestrial life is likely to be microbial. The lucky thing for us is that there is an abundance of places where we can search for it — not least Europa, an icy moon of Jupiter that is believed to harbor a global ocean and that NASA wants to visit fairly soon. What lurks in those waters?

To gain a better understanding of the extremes of life, scientists regularly look at bacteria and other lifeforms here on Earth that can make their living in hazardous spots. One recent line of research involves shrimp that live in almost the same area as bacteria that survive in vents of up to 750 degrees Fahrenheit (400 degrees Celsius) — way beyond the boiling point, but still hospitable to life.

Far from sunlight, the bacteria get their energy from chemical reactions (specifically, involving hydrogen sulfide). While the shrimp certainly don’t live inside these hostile vents, they perch just at the edge — about an inch away. The shrimp feed on the bacteria, which in turn feed on the hydrogen sulfide (which is toxic to larger organisms in high enough concentrations). Oh, and by the way, some of the shrimp are likely cannibals!

According to the evidence, members of one species, Rimicaris hybisae, likely feed on each other. This happens in areas where the bacteria are not as abundant and the organisms need to find other food to survive. To be sure, nobody saw the shrimp munching on each other, but scientists did find small crustaceans inside them — and there are few other types of crustaceans in the area.

But how likely, really, are these organisms on Europa? Bacteria might be plausible, but something larger and more complicated? The researchers say this all depends on how much energy the ecosystems have to offer. And in order to see up close, we’d have to get underwater somehow and do some exploring.

In a recent Universe Today interview with Mike Brown, a professor of planetary science at the California Institute of Technology, the renowned dwarf-planet hunter talked about how a submarine could do some neat work.

“In the proposed missions that I’ve heard, and in the only one that seems semi-viable, you land on the surface with basically a big nuclear pile, and you melt your way down through the ice and eventually you get down into the water,” he said. “Then you set your robotic submarine free and it goes around and swims with the big Europa whales.” You can see the rest of that interview here.

Source: Jet Propulsion Laboratory

The puzzling, fascinating surface of Jupiter's icy moon Europa looms large in this newly-reprocessed color view, made from images taken by NASA's Galileo spacecraft in the late 1990s. Image credit: NASA/JPL-Caltech/SETI Institute

DNA Won’t Be Killed Dead By A Rocket Ride To Space, Study Suggests

Launch of the TEXUS-51 sounding rocket that included plasmid DNA on the exterior of the rocket. A November 2014 study based on the flight suggests DNA could survive a suborbital spaceflight. Credit: Adrian Mettauer

So how ’bout those planetary protection agreements? Turns out that plasmid DNA — the kind that exists in bacterial cells — may be able to survive a rocket trip to space, based on research with an engineered version. And if life’s building blocks can get there, perhaps they can even go beyond. The International Space Station? Mars?

This information comes from a single peer-reviewed study based on a sounding rocket that went into suborbital space in March 2011. Called TEXUS-49, its payload included artificial plasmid DNA that had both a fluorescent marker and an antibiotic resistance gene.

Even on the 13-minute flight, temperatures on the rocket exterior soared to 1,000 degrees Celsius (1,832 degrees Fahrenheit). And remarkably, the DNA survived.

While we talk about Earth having carbon-based life forms, the coding parts of DNA are nucleotides - with a carbon content of zero. Credit: NASA (adapted image).

Not all of the DNA was working properly, though. Up to 35% of it had its “full biological function”, researchers stated, specifically in terms of helping bacteria with antibiotic resistance and encouraging the fluorescent marker to express itself in eukaryotic cells, the cell type found in animals and plants.

The next step, naturally, would be to test this result with more flights, the authors suggest. But interestingly enough, DNA survival wasn’t even the intended goal of the original study, even though there are stories of simple life surviving for a time in space, such as the spores on the exterior of the International Space Station shown in the image below.

Images of Bacillus pumilus SAFR-032 spores (seen in an electron micrograph) on aluminum before and after being exposed to space on an International Space Station experiment. Credit: P. Vaishampayan, et al./Astrobiology

“We were totally surprised. Originally, we designed this experiment as a technology test for biomarker stability during spaceflight and re-entry,” the authors wrote in a statement for PLOS.

“We never expected to recover so many intact and functional active DNA. But it is not only an issue from space to Earth, it is also an issue from Earth to space and to other planets: Our findings made us a little bit worried about the probability of contaminating spacecrafts, landers and landing sites with DNA from Earth.”

You can read more about the study in the journal PLOS One. The research was led by the University of Zurich’s Cora Thiel.

Source: PLOS

Philae’s Wild Comet Landing: Crater Grazing, Spinning And Landing In Parts Unknown

Philae landed nearly vertically on its side with one leg up in outer space. Here we see it in relation to the panoramic photos taken with the CIVA cameras. Credit: ESA

No, scientists haven’t found Philae yet. But as they churn through the scientific data on the comet lander, more information is emerging about the crazy landing last month that included three touchdowns and an incredible two hours of drifting before Philae came to rest in a relatively shady spot on the surface.

Among the latest: the tumbling spacecraft “collided with a surface feature” shortly after its first landing, perhaps grazing a crater rim with one of its legs. This information comes from an instrument called ROMAP (Rosetta Lander Magnetometer and Plasma Monitor) that monitors magnetic fields. The instrument is now being used to track down the spacecraft.

ROMAP’s usual role is to look at the comet’s magnetic field as it interacts with the solar wind, but the challenge is that the orbiter (Rosetta) and the lander both create tiny magnetic fields of their own due to their electronic circuitry. Usually this data is removed to see what the comet’s environment is like. But during the landing, ROMAP was used to track Philae’s descent.

Four images of Comet 67P/Churyumov–Gerasimenko taken on Nov. 30, 2014 by the orbiting Rosetta spacecraft. Credit: ESA/Rosetta/NAVCAM – CC BY-SA IGO 3.0

Philae was supposed to fire harpoons to secure itself to the surface when it touched down at 3:34 p.m. UTC (10:34 a.m. EST) Nov. 12, but the mechanism failed. ROMAP’s data then shows the spin rate increasing, with the lander turning at one rotation every 13 seconds.

The grazing collision happened at 4:20 p.m. UTC (11:20 a.m. EST), slowing the rotation to once every 24 seconds. The final two touchdowns then happened around 5:25 p.m. UTC (12:25 p.m. EST) and 5:31 p.m. UTC (12:31 p.m. EST). Controllers hope they can figure out exactly where Philae came to rest once they look at data from ROMAP, CONSERT and other instruments on the lander.
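As a quick back-of-the-envelope check, the rotation periods reported by ROMAP translate into spin rates like so (an illustrative sketch in plain Python; the function name is mine, and the 13- and 24-second periods are the figures quoted above):

```python
# Illustrative sketch: Philae's spin rate before and after the grazing
# collision, derived from the rotation periods reported by ROMAP.

def spin_rate_deg_per_s(period_s: float) -> float:
    """Angular speed in degrees per second for one rotation per `period_s`."""
    return 360.0 / period_s

before = spin_rate_deg_per_s(13.0)  # ~27.7 deg/s after the first touchdown
after = spin_rate_deg_per_s(24.0)   # 15.0 deg/s after grazing the crater rim
print(f"{before:.1f} deg/s -> {after:.1f} deg/s")
```

The roughly halved spin rate is what pointed researchers to a collision with a surface feature between the touchdowns.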

Philae is now hibernating because there isn’t enough sunlight in its landing spot to recharge its battery through the solar panels. Rosetta, meanwhile, continues orbiting 67P and sending back pictures of the comet as it draws closer to the Sun, including the image you see further up in this blog post, released today (Dec. 2) a few days after it was taken in space.

Source: European Space Agency

Shooting “Color” in the Blackness of Space

A beautiful image of Saturn’s tiny moon Daphnis — but where is all the color?

If NASA is so advanced, why are their pictures in black and white?

It’s a question that I’ve heard, in one form or another, for almost as long as I’ve been talking with the public about space. And, to be fair, it’s not a terrible inquiry. After all, the smartphone in my pocket can shoot something like ten high-resolution color images every second. It can automatically stitch them into a panorama, correct their color, and adjust their sharpness. All that for just a few hundred bucks, so why can’t our billion-dollar robots do the same?

The answer, it turns out, brings us to the intersection of science and the laws of nature. Let’s take a peek into what it takes to make a great space image…

Perhaps the one thing that people most underestimate about space exploration is the time it takes to execute a mission. Take Cassini, for example. It arrived at Saturn back in 2004 for a planned four-year mission. The journey to Saturn, however, is about seven years, meaning that the spacecraft launched way back in 1997. And planning for it? Instrument designs were being developed in the mid-1980s! So, when you next see an astonishing image of Titan or the rings here at Universe Today, remember that the camera taking those shots is using technology that’s almost 30 years old. That’s pretty amazing, if you ask me.

But even back in the 1980s, the technology to build color cameras existed. Mission designers simply chose not to use it, and they had a couple of great reasons for making that decision.

Perhaps the most practical reason is that color cameras simply don’t collect as much light. Each “pixel” on your smartphone sensor is really made up of four individual detectors: one red, one blue, two green (human eyes are more sensitive to green!). The camera’s software combines the values of those detectors into the final color value for a given pixel. But, what happens when a green photon hits a red detector? Nothing, and therein lies the problem. Color sensors only collect a fraction of the incoming light; the rest is simply lost information. That’s fine here on Earth, where light is more or less spewing everywhere at all times. But, the intensity of light follows one of those pesky inverse-square laws in physics, meaning that doubling your distance from a light source results in it looking only a quarter as bright.

That means that spacecraft orbiting Jupiter, which is about five times farther from the Sun than is the Earth, see only four percent as much light as we do. And Cassini at Saturn sees the Sun as one hundred times fainter than you or I. To make a good, clear image, space cameras need to make use of all the little light available to them, which means making do without those fancy color pixels.
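You can check that inverse-square arithmetic yourself. Here's a minimal sketch (plain Python; the 5.2 AU and 9.5 AU orbital distances are approximate values I've assumed for Jupiter and Saturn):

```python
# Illustrative sketch (not mission code): relative sunlight intensity
# under the inverse-square law, I ∝ 1/d², with d in astronomical units.

def relative_brightness(distance_au: float) -> float:
    """Sunlight intensity relative to Earth (1 AU) via the inverse-square law."""
    return 1.0 / distance_au ** 2

# Jupiter orbits ~5.2 AU from the Sun, Saturn ~9.5 AU.
print(f"Jupiter: {relative_brightness(5.2):.1%} of Earth's sunlight")  # ~3.7%
print(f"Saturn:  {relative_brightness(9.5):.1%} of Earth's sunlight")  # ~1.1%
```

Note that doubling the distance gives exactly a quarter of the brightness, as the article says, and Saturn's roughly one percent matches the "one hundred times fainter" figure.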

A mosaic of images through different filters on NASA's Solar Dynamics Observatory. Image credit: NASA/SDO/Goddard Space Flight Center

The darkness of the solar system isn’t the only reason to avoid using a color camera. To the astronomer, light is everything. It’s essentially our only tool for understanding vast tracts of the Universe and so we must treat it carefully and glean from it every possible scrap of information. A red-blue-green color scheme like the one used in most cameras today is a blunt tool, splitting light up into just those three categories. What astronomers want is a scalpel, capable of discerning just how red, green, or blue the light is. But we can’t build a camera that has red, orange, yellow, green, blue, and violet pixels – that would do even worse in low light!

Instead, we use filters to test for light of very particular colors that are of interest scientifically. Some colors are so important that astronomers have given them particular names; H-alpha, for example, is a brilliant hue of red that marks the location of hydrogen throughout the galaxy. By placing an H-alpha filter in front of the camera, we can see exactly where hydrogen is located in the image – useful! With filters, we can really pack in the colors. The Hubble Space Telescope’s Advanced Camera for Surveys, for example, carries 38 different filters for a vast array of tasks. But each image taken still looks grayscale, since each exposure carries only a single channel of color information.

At this point, you’re probably saying to yourself “but, but, I KNOW I have seen color images from Hubble before!” In fact, you’ve probably never seen a grayscale Hubble image, so what’s up? It all comes from what’s called post-processing. Just like a color camera can combine color information from three detectors to make the image look true-to-life, astronomers can take three (or more!) images through different filters and combine them later to make a color picture. There are two main approaches to doing this, known colloquially as “true color” and “false color.”
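The combination step can be sketched in a few lines. This is an illustrative toy example (using NumPy, with a simple per-channel stretch I've assumed for display purposes), not an actual pipeline from Hubble or any mission:

```python
# Illustrative sketch: combining three grayscale exposures taken through
# different filters into one RGB color image, as post-processing does.
import numpy as np

def combine_filters(red_img, green_img, blue_img):
    """Stack three single-filter (grayscale) frames into an RGB array,
    stretching each channel to the 0..1 display range."""
    channels = []
    for img in (red_img, green_img, blue_img):
        img = img.astype(float)
        img = (img - img.min()) / (img.max() - img.min())  # stretch to 0..1
        channels.append(img)
    return np.dstack(channels)  # shape (height, width, 3)

# Fake 4x4 "exposures" standing in for three filtered frames.
rng = np.random.default_rng(0)
r, g, b = (rng.random((4, 4)) for _ in range(3))
color = combine_filters(r, g, b)
print(color.shape)  # (4, 4, 3)
```

Assigning the red, green, and blue channels to filters that match human vision gives a "true color" result; assigning them to arbitrary narrowband filters (H-alpha, ionized oxygen, sulfur) gives "false color."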

A "true color" image of the surface of Jupiter's moon Europa as seen by the Galileo spacecraft. Image credit: NASA/JPL-Caltech/SETI Institute

True color images strive to work just like your smartphone camera. The spacecraft captures images through filters which span the visible spectrum, so that, when combined, the result is similar to what you’d see with your own eyes. The recently released Galileo image of Europa is a gorgeous example of this.

Our eyes would never see the Crab Nebula as this Hubble image shows it. Image credit: NASA, ESA, J. Hester and A. Loll (Arizona State University)

False color images aren’t limited by what our human eyes can see. They assign different colors to different features within an image. Take this famous image of the Crab Nebula, for instance. The red in the image traces oxygen atoms that have had electrons stripped away. Blue traces normal oxygen and green indicates sulfur. The result is a gorgeous image, but not one that we could ever hope to see for ourselves.

So, if we can make color images, why don’t we always? Again, the laws of physics step in to spoil the fun. For one, things in space are constantly moving, usually really, really quickly. Perhaps you saw the first color image of comet 67P/Churyumov-Gerasimenko released recently. It’s kind of blurry, isn’t it? That’s because both the Rosetta spacecraft and the comet moved in the time it took to capture the three separate images. When combined, they don’t line up perfectly and the image blurs. Not great!

The first color image of comet 67P/Churyumov-Gerasimenko. Image credit: ESA/Rosetta

But it’s the inverse-square law that is the ultimate challenge here. Radio waves, as a form of light, also rapidly become weaker with distance. When it takes 90 minutes to send back a single HiRISE image from the Mars Reconnaissance Orbiter, every shot counts and spending three on the same target doesn’t always make sense.

Finally, images, even color ones, are only one piece of the space exploration puzzle. Other observations, from measuring the velocity of dust grains to the composition of gases, are no less important to understanding the mysteries of nature. So, next time you see an eye-opening image, don’t mind that it’s in shades of gray. Just imagine everything else that lack of color is letting us learn.

NASA Airship Could Watch The Stars Without The Need Of a Rocket

Artist's concept of a NASA airship that would fly at a suborbital altitudes for hours at a time. Credit: Mike Hughes (Eagre Interactive)/Keck Institute for Space Studies

Dreams of space are often tied to jet engines or solar sails or taking a ride on a rocketship. But it’s often quite efficient to do research from Earth, especially from the high reaches of the atmosphere where there are few molecules to get in the way of observations.

NASA wants to do more of this kind of astronomy with an airship — but at an extreme height of 65,000 feet (20 kilometers) for 20 hours. No powered-airship mission has managed to last past eight hours at this height because of the winds in that zone, but NASA is hoping that potential creators would be up to the challenge.

This isn’t a guaranteed mission yet. NASA has a solicitation out right now to gauge interest from the community, and to figure out if it is technically feasible. This program would be a follow-on to ideas such as SOFIA, a flying stratospheric telescope that the agency plans to defund in future budgets.

Their goal is to fly an airship with a 44-pound (20-kilogram) payload at this altitude for 20 hours. If a company is feeling especially able, it can even try for a more difficult goal: a 440-pound (200-kilogram) payload for 200 hours.

NASA's Stratospheric Observatory for Infrared Astronomy 747SP aircraft flies over Southern California's high desert during a test flight in 2010. Credit: NASA/Jim Ross

“We are seeking to take astronomy and Earth science to new heights by enabling a long-duration, suborbital platform for these kinds of research,” stated lead researcher Jason Rhodes, an astrophysicist at NASA’s Jet Propulsion Laboratory in California.

And why not just use a balloon? It comes down to communications, NASA says: “Unlike a balloon, which travels with air currents, airships can stay in one spot,” the agency states. “The stationary nature of airships allows them to have better downlink capabilities, because there is always a line-of-sight communication.”

If the prize goes forward, NASA is considering awarding $2 million to $3 million across multiple prizes. You can get more on the official request for information at this link.

Source: NASA

Live Discussion: How Good is the Science of “Interstellar?”

Kip Thorne’s concept for a black hole in 'Interstellar.' Image Credit: Paramount Pictures

The highly anticipated film “Interstellar” is based on science and theory: from wormholes, to the push-pull of gravity on a planet, to the way a black hole might re-adjust your concept of time. But just how much of the movie is really true to what we know about the Universe? There has also been some discussion about whether the physics used for the visual effects in the movie was actually good enough to produce some science. But how much of it is just creative license?

Today (Wed., November 26) at 19:00 UTC (2:00 p.m. EST, 11:00 a.m. PST), the Kavli Foundation hosts a live discussion with three astrophysicists who will answer viewers’ questions about black holes, relativity and gravity, to separate the movie’s science facts from its science fiction.

According to the Kavli Twitter feed, the Hangout will even help you understand what in the world happened at the end of the movie!

Scientists Mandeep Gill, Eric Miller and Hardip Sanghera will answer your questions in the live Google Hangout.

Submit questions ahead of and during the webcast by emailing [email protected] or by using the hashtag #KavliSciBlog on Twitter or Google+.

You can watch today’s hangout here:

Also, you can enjoy the “Interstellar” trailer:

A Thousand Days ‘Til Totality: Anticipating the 2017 Solar Eclipse

The total solar eclipse of November 2012 as seen from

Where will YOU be on August 21st, 2017?

Astronomy is all about humility and thinking big in terms of space and time. It’s routine for astronomers to talk of comets on thousand year orbits, or stars with life spans measured in billions of years…

Yup, the lifespan of your average humanoid is indeed fleeting, and pales in comparison to the universe, that’s for sure. But one astronomical series that you can hope to live through is the cycle of eclipses.

I remember reading about the total solar eclipse of February 26th, 1979 as a kid. Carter was in the White House, KISS was mounting yet another comeback, and Voyager 1 was wowing us with images of Jupiter. That was also the last total solar eclipse to grace mainland United States in the 20th century.

But the ongoing “eclipse-drought” is about to be broken.

The path
The path of totality across the United States on August 21st, 2017. Credit: Great American Eclipse.com.

One thousand days from this coming Monday, November 24th, the shadow of the Moon will touch down off the Oregon coast on August 21st, 2017, and sweep eastward across the U.S. heartland before heading out into the Atlantic off the coast of South Carolina. Millions live within a day’s drive of the 115-kilometre-wide path, and the eclipse has the added plus of occurring at the tail end of summer vacation season. This also means that lots of folks will be camping and otherwise mobile in their RVs, able to journey to the event.

The Great American Eclipse of 2017 from Michael Zeiler on Vimeo.

This will also be the first total solar eclipse to pass over any of the 50 United States since July 11th, 1991, and the first to cross the contiguous United States from “sea to shining sea” since way back on June 8th, 1918.

Think it’s too early to prepare? Towns across the path, including Hopkinsville, Kentucky, and towns in Kansas and Nebraska, are already laying plans for eclipse day. Other major U.S. cities, such as Nashville, Idaho Falls, and Columbia, South Carolina, also lie along the path of totality, and the spectacle promises to spawn a whole new generation of “umbraphiles,” or eclipse chasers.

A total solar eclipse is an unforgettable sight. But unlike a total lunar eclipse, which can be viewed from the moonward-facing hemisphere of the Earth, one generally has to journey to the narrow path of totality to see a total solar eclipse. Totality rarely comes to you.

Viewing
The Zeilers view the November 2013 eclipse from Africa. Credit: Michael Zeiler.

And don’t settle for a 99% partial eclipse just outside the path. “There’s no comparison between partial and total solar eclipses when it comes to sheer grandeur and beauty,” Michael Zeiler, longtime eclipse chaser and creator of the Great American Eclipse website, told Universe Today. We witnessed the 1994 annular eclipse of the Sun from the shores of Lake Erie, and can attest that a 99% partial eclipse is still pretty darned bright!

There are two total solar eclipses remaining worldwide before 2017: one on March 20th, 2015, crossing the high Arctic, and another on March 9th, 2016, over Southeast Asia. The 2017 eclipse offers a maximum of 2 minutes and 41 seconds of totality, and weather prospects for the eclipse in late August favor viewers along the northwestern portion of the track.

And though an armada of cameras will be prepared to capture the eclipse along its trek across the U.S., many veteran eclipse chasers suggest that first time viewers merely sit back and take in the moment. The onset of totality sees a bizarre sort of twilight fall across the landscape, as shadow bands skip across the countryside, temperatures drop, and wildlife is fooled into thinking that nightfall has come early.

And then, all too soon, the second set of blinding diamond rings bursts through the lunar valleys, the eclipse glasses go back on, and totality is over. Which always raises the question heard throughout the crowd post-eclipse:

When’s the next one?

Well, the good news is, the United States will host a second total solar eclipse on April 8th, 2024, just seven years later! This path will run from the U.S. Southwest to New England, and crisscross the 2017 path right around Carbondale, Illinois.

Will the woo that surfaced around the approach of Comet ISON and the lunar tetrad of “blood Moon” eclipses rear its head in 2017? Ah, eclipses and comets seem to bring ’em out of the woodwork, and 2017 will likely see a spike in talking-head gloom-and-doom videos à la YouTube. Some will no doubt cite the “perfection” seen during total solar eclipses as proof of divine inspiration, though this is actually just a product of our vantage point in time and space. In fact, annular eclipses are slightly more common than total solar eclipses in our current epoch, and will become more so as the Moon slowly recedes from the Earth. And we recently noted in our post on the mutual phenomena of Jupiter’s moons that solar eclipses very similar to those seen from the Earth can also be spied from Callisto.

Heads up to any future interplanetary eclipse resort developer: Callisto is prime real estate.

Forget Mars… “Get your ass to totality!” Credit: Great American Eclipse.

The 2017 total solar eclipse across America will be one for the history books, that’s for sure.

So get those eclipse safety glasses now, and be sure to keep ‘em handy through 2017 and onward to 2024!

-Read Dave Dickinson’s eclipse-fueled science fiction tales Shadowfall and Exeligmos.

BICEP2 All Over Again? Researchers Place Higgs Boson Discovery in Doubt

This is the signature of one of hundreds of trillions of particle collisions detected at the Large Hadron Collider. The combined analysis led to the discovery of the Higgs Boson. This article describes one team in dissension with the results. (Photo Credit: CERN)

At the Large Hadron Collider (LHC) in Europe, faster is better. Faster means more powerful particle collisions and a deeper look into the makeup of matter. However, other researchers are proclaiming: not so fast. The LHC may not have discovered the Higgs Boson, the boson that imparts mass to everything, the “god particle” as some have called it. While the Higgs Boson discovery in 2012 culminated in the awarding of the December 2013 Nobel Prize to Peter Higgs and François Englert, a team of researchers has raised doubts about the Higgs Boson in a paper published in the journal Physical Review D.

The discourse is similar to what unfolded in the last year with the detection of light from the beginning of time that signified the Inflation epoch of the Universe. Researchers looking into the depths of the Universe and the inner depths of subatomic particles are searching for signals at the edge of detectability, just above the noise level and in proximity to signals from other sources. For the BICEP2 telescope observations (see previous U.T. articles), it’s pretty much back to the drawing board; the doubts about the Higgs Boson (previous U.T. articles) pose a real challenge, but they still need more solid evidence. And in human affairs, if the Higgs Boson was not detected by the LHC, what does one do with an already-awarded Nobel Prize?

Cross-section of the Large Hadron Collider where its detectors are placed and collisions occur. The LHC is as much as 175 meters (574 ft) below ground on the Franco-Swiss border near Geneva, Switzerland. The accelerator ring is 27 km (17 miles) in circumference. (Photo Credit: CERN)

The present challenge to the Higgs Boson is not new, and it is not just a problem of detectability and sensor acuity, as is the case with the BICEP2 data. (The Planck space telescope revealed that light radiated from dust combined with the magnetic field in our Milky Way galaxy could explain the signal that BICEP2 researchers proclaimed as the primordial signature of the Inflation period.) The Higgs Boson is a prediction of the theory proposed by Peter Higgs and several others beginning in the early 1960s: a particle predicted by the gauge theory, developed by Higgs, Englert and others, that lies at the heart of the Standard Model.

This recent paper is from a team of researchers from Denmark, Belgium and the United Kingdom led by Dr. Mads Toudal Frandsen. Their study, entitled “Technicolor Higgs boson in the light of LHC data,” discusses how the theory they support predicts Technicolor quarks across a range of energies detectable at the LHC, and that one in particular lies within the uncertainty level of the data point declared to be the Higgs Boson. There are variants of Technicolor Theory (TC), and the research paper compares in detail the field theory behind the Standard Model Higgs and the TC Higgs (their version of the Higgs boson). Their conclusion is that Technicolor Theory predicts a TC Higgs that is consistent with the expected physical properties, has low mass, and sits at an energy level – 125 GeV – indistinguishable from the resonance now considered to be the Standard Model Higgs. Theirs is a composite particle, and it does not impart mass upon everything.

So you say – hold on! What is Technicolor in the jargon of particle physics? To answer this you would want to talk to a plumber from the South Bronx, New York – Dr. Leonard Susskind. Though no longer a plumber, Susskind first proposed Technicolor to describe the breaking of symmetry in gauge theories that are part of the Standard Model. Susskind and other physicists from the 1970s considered it unsatisfactory that many arbitrary parameters were needed to complete the gauge theory used in the Standard Model (involving the Higgs scalar and Higgs field). The parameters consequently defined the mass of elementary particles and other properties. These parameters were being assigned, not calculated, and that was not acceptable to Susskind, ’t Hooft, Veltman and others. The solution involved the concept of Technicolor, which provided a “natural” means of describing the breakdown of symmetry in the gauge theories that make up the Standard Model.

Technicolor in particle physics shares one simple thing in common with Technicolor that dominated the early color film industry – the term composite in creating color or particles.

Dr. Leonard Susskind, a leading developer of the Theory of Technicolor (left), and Nobel Prize winner Dr. Peter Higgs, who proposed the existence of a particle that imparts mass to all matter – the Higgs Boson (right). (Photo Credit: Stanford University, CERN)

If the theory surrounding Technicolor is correct, then there should be many techni-quark and techni-Higgs particles to be found with the LHC or a more powerful next-generation accelerator; a veritable zoo of particles besides just the Higgs Boson. The theory also means that these ‘elementary’ particles are composites of smaller particles, and that another force of nature would be needed to bind them. And this new paper by Belyaev, Brown, Foadi and Frandsen claims that one specific techni-quark particle has a resonance (detection point) that is within the uncertainty of measurements for the Higgs Boson. In other words, the Higgs Boson might not be “the god particle” but rather a Technicolor quark, comprised of smaller, more fundamental particles and bound by another force.

This paper by Belyaev, Brown, Foadi and Frandsen is a clear reminder that the Standard Model is unsettled and that even the discovery of the Higgs Boson is not 100% certain. In the last year, more sensitive detectors have been integrated into CERN’s LHC; they will help either refute this challenge to the Higgs theory – the Higgs scalar and field, and the Higgs Boson – or reveal the signatures of Technicolor particles. Better detectors may resolve the difference between the energy level of the Technicolor quark and that of the Higgs Boson. LHC researchers were quick to state that their work now moves beyond the discovery of the Higgs Boson – and that work could actually disprove that the Higgs Boson is what they found.

When Universe Today contacted co-investigator Dr. Alexander Belyaev, the question was raised – will the recent upgrades to the CERN accelerator provide the precision needed to differentiate a techni-quark from the Higgs particle?

“There is no guarantee of course,” Dr. Belyaev responded to Universe Today, “but upgrade of LHC will definitely provide much better potential to discover other particles associated with theory of Technicolor, such as heavy Techni-mesons or Techni-baryons.”

Resolving the doubts and choosing the right additions to the Standard Model will depend on better detectors, more observations, and collisions at higher energies. Presently, the LHC is down while collision energies are raised from 8 TeV to 13 TeV. Among the observations at the LHC, supersymmetry has not fared well, while the observations – including the Higgs Boson discovery – have supported the Standard Model. The weakness of the Standard Model of particle physics is that it does not explain the gravitational force of nature, whereas supersymmetry can. As this latest paper shows, the theory of Technicolor maintains strong supporters, and it leaves some doubt as to whether the Higgs Boson was actually detected. Ultimately, another more powerful next-generation particle accelerator may be needed.

In a previous Universe Today story, the question was raised – is the Standard Model a Rube Goldberg Device? Most theorists would say ‘no’, but it is unlikely to reach the status of the ‘theory of everything’. (Illustration Credit: R. Goldberg – the toothpaste dispenser, variant T. Reyes)

For Higgs and Englert, a reversal of the discovery would by no means be the ruination of a life’s work, nor would it mean the dismissal of a Nobel Prize. The theoretical work of these physicists has long been recognized by previous awards. The Standard Model, as at least a partial solution of the theory of everything, is like a jigsaw puzzle: it is being developed piece by piece, but not without missteps. Furthermore, the pieces added to the Standard Model can be like a house of cards – one solution may have to be replaced wholesale by another. This could be the case with Higgs and Technicolor.

At times, like somewhat determined children, physicists thrust a piece into the unfolding puzzle that seems to fit but ultimately has to be retracted. The present discourse does not yet warrant a retraction. Elegance and simplicity are the ultimate characteristics sought in theoretical solutions; particle physicists also use the term ‘naturalness’ when describing their concerns with gauge theory parameters. The pieces of the puzzle contributed by Peter Higgs and François Englert have spearheaded and encouraged further work that will achieve a sounder Standard Model, but few if any claim that it will emerge as the theory of everything.

References:

Pre-print of Technicolor Higgs boson in the light of LHC data

An Introduction to Technicolor, P. Sikivie, CERN, October 1980

Technicolour, Farhi & Susskind, March 1981

Review: In “Interstellar,” Christopher Nolan Shows He Has The Right Stuff

Matthew McConaughey wades through an ocean on another planet. This is not a fishing expedition – he is out to save his children and all humanity. Image courtesy Paramount.

Science fiction aficionados, take heed: the highly anticipated movie Interstellar is sharp and gripping, and Nolan and cast show in the end that they have the right stuff. Nearly a three-hour saga, it holds your attention and keeps you guessing; only a couple of scenes seem to drift and lose focus. Interstellar borrows style and substance from some of the finest in the genre while adding new twists and paying attention to real science. If a science-fiction movie shies away from imagining the unknown – taking its best shot at what we do not know – then it fails a key test of the genre. Interstellar delivers in this respect very well.

Jessica Chastain, the grown daughter of astronaut McConaughey, takes a torch to the cornfields. Interstellar viewers are likely to show no sympathy for the ever-present corn fields. Image courtesy Paramount.

The movie begins quite unassumingly in an oddly green but dusty farmland. It does not rely on showing off futuristic views of Earth and humanity to dazzle us. However, when you see a farming family with a dinner table full of nothing but variations on their cash crop – known mostly as feedstock for swine and cattle – you know humanity has fallen on hard times. McConaughey! Save us now! I do not want to live in such a future!

One is left wondering what brought about the conditions facing humanity at the onset of the movie. One can easily imagine a couple of hot-topic issues that split the American public in two, but Nolan doesn’t try to give Interstellar a political or religious bent. NASA is in the movie, but after decades of further neglect it is a shadow of even its present self.

Somehow, recent science fiction movies – Gravity being one exception – would have us believe that the majority of American astronauts are from the Midwest. Driving a John Deere when you are 12, or being raised under a big sky or in proximity to the home of the Wright Brothers, would make you hell-bent to get out of Dodge and not just see the world but leave the planet. Matthew McConaughey adds to that persona.

Dr. Kip Thorne made it clear that black is not the primary hue of black holes. His guidance to Nolan raised science fiction to a new level. Image courtesy Paramount.

We are seemingly in a golden age of astronomy. At present, a science fiction movie’s special effects can hardly match the imagery that European and American astronomy is delivering day after day. One of our planets gets a very modest rendering in Interstellar – an undergraduate graphic artist could take NASA imagery and outshine those scenes quite easily. However, it appears that Nolan did not see it necessary to outdo every scene of past sci-fi, or every Astronomy Picture of the Day (APOD), to make a great movie.

Nolan drew upon American astrophysicist Dr. Kip Thorne, an expert on Einstein’s General Relativity, to deliver a world-class presentation of possibly the most extraordinary objects in our Universe – black holes. It is fair to place Thorne alongside the likes of Sagan, Feynman, Clarke and Bradbury in advising on and delivering the wonders of the cosmos in compelling cinematic form. In Interstellar, using a black hole in place of a star to hold a planetary system is fascinating and also a bit unbelievable; whether life could persist in such a system is an open question. There is one scene involving the Apollo Moon landings that will distress most everyone in and around NASA, and one has to wonder if Thorne was pulling a fast one on old NASA friends.

Great science fiction combines a vision of the future with a human story. McConaughey and family are pretty unassuming. John Lithgow, who plays grandpa, the retired farmer, doesn’t add much; some craggy old character actor would have been just fine. Michael Caine as the lead professor works well, and Caine’s mastery is used to thicken and twist the plot. His role is not unlike the one he played in Children of Men: he creates bends in the plot that the rest of the cast must conform to.

There was one piece of advice I read in previews of Interstellar: see it in IMAX format. So I ventured over to the IMAX screening at the Technology Museum in Silicon Valley. I think this advice was half correct. The Earthly scenes gained little or nothing from IMAX, but once the story moved to outer space, IMAX was the right stuff. Portraying a black hole and other celestial wonders is not easy for anyone, including the greatest physicists of our era, and Thorne and Nolan were right to use the IMAX format.

According to industry insiders, Nolan is one of a small group of directors with the clout to demand film recording rather than digital. Nolan used film and effects to give Interstellar a very earthy, organic feel; that worked, and scenes transitioned well to the sublime of outer space. Interstellar now shares the theaters with another interesting movie with science leanings: the Stephen Hawking biography “The Theory of Everything,” which is getting very good reviews. The two hold different ties to science, and I suspect sci-fi lovers will be attracted to seeing both. Interstellar had been out just one full day, and I ran into moviegoers who had already seen it more than once.

Where does Interstellar stand compared to Stanley Kubrick’s works? It doesn’t make that grade of science fiction that stands up as a century-class movie. However, Thorne’s and Nolan’s treatment of black holes, wormholes and gravity is excellent. Interstellar makes a 21st-century use of gravity, in contrast to Gravity, which was stuck in the 20th century warning us to be careful where we park our space vehicles. In the end, Matthew McConaughey serves humanity well. Anne Hathaway plays a role not unlike Jodie Foster’s in Contact – an intellectual but sympathetic female scientist.

Jessica Chastain, playing the grown-up daughter of McConaughey, brings real angst and an edge to the movie, as does Mackenzie Foy, playing the part as a child. Call it the viewport for each character – they are short and narrow, and Chastain uses hers very well. Matt Damon shows up in a modest but key role and does not disappoint. Nolan’s directing and cinematography are impressive – not splashy, but one is gripped by the scenes. Filming in the small confines of spaceships and spacesuits is challenging, and Nolan pulls it off very well. Don’t miss Interstellar in the theaters; it matches and exceeds the quality of several recent science fiction movies. Stepping back onto the street after the movie, the world seemed surprisingly comforting, and I was glad to be back from the uncertain future Nolan created.

Building A Space Base, Part 2: How Much Money Would It Take?

Artist's concept for a Lunar base. Credit: NASA

How much would it cost to establish a space base close to Earth, say on the Moon or an asteroid? To find out, Universe Today spoke with Philip Metzger, a former senior research physicist at NASA’s Kennedy Space Center, who has explored this subject extensively on his website and in published papers.

Yesterday, Metzger outlined the rationale for establishing a base in the first place, while today he focuses on the cost.

UT: Your 2012 paper talks specifically about how much development is needed on the Moon to make the industry “self-sustaining and expanding”, but leaves out the cost of getting the technology ready and of its ongoing operation. Why did you leave this assessment until later? How can we get a complete picture of the costs?

PM: As we stated at the start of the paper, our analysis was very crude and was intended only to garner interest in the topic so that others might join us in doing a more complete, more realistic analysis. The interest has grown faster than I expected, so maybe we will start to see these analyses happening now including cost estimates. Previous analyses talked about building entire factories and sending them into space. The main contribution of our initial paper was to point out that there is this bootstrapping strategy that has not been discussed previously, and we argued that it makes more sense. It will result in a much smaller mass of hardware launched into space, and it will allow us to get started right away so that we can figure out how to make the equipment work as we go along.

Moonbase rover concept – could be used for long-term missions (NASA)

Trying to design up front everything in a supply chain for space is impossible. Even if we got the budget for it and gave it a try, we would discover that it wouldn’t work when we sent it into the extraterrestrial environments. There are too many things that could go wrong. Evolving the supply chain in stages will allow us to work out the bugs as we go. So the paper was arguing for the community to take a look into this new strategy for space industry.

Now, having said that, I can still give you a very crude cost estimate if you want one. Our model shows a total of about 41 tons of hardware being launched to the Moon, but that results in 100,000 tons of hardware when we include what was made there along the way. If 41 tons turns out to be correct, then let’s take 41% of the cost of the International Space Station as a crude estimate, because that has a mass of 100 tons and we can roughly estimate that a ton of space hardware costs about the same in every program. Then let’s multiply by four because it takes four tons of mass launched to low Earth orbit to land one ton on the Moon.

That may be an over-estimate, because the biggest cost of the International Space Station was the labor to design, build, assemble, and test before launch, including the cost of operating the space shuttle fleet. But the hardware for space industry includes many copies of the same parts so design costs should be lower, and since human lives will not be at stake they don’t need to be as reliable. As discussed in the paper, the launch costs will also be much reduced with the new launch systems coming on line.

The International Space Station in March 2009 as seen from the departing STS-119 space shuttle Discovery crew. Credit: NASA/ESA

Furthermore, the cost can be divided by 3.5 according to the crude modeling, because 41 tons is needed only if the industry is making copies of itself as fast as it can. If we slow it down to making just one copy of the industry along the way as it evolves, then only 12 tons of hardware needs to be sent to the Moon. That gives us an estimate of the total cost over the entire bootstrapping period, so if we take 20 or 30 or 40 years to accomplish it, divide by that span to get the annual cost. You end up with a number that is a minority fraction of NASA’s annual budget, a minuscule fraction of the total U.S. federal budget, an even tinier fraction of the U.S. gross domestic product, and an utterly insignificant cost per human being in the developed nations of the Earth.
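Metzger’s back-of-the-envelope arithmetic can be sketched in a few lines of Python. The ISS program cost used below is an assumed placeholder (the interview gives only the station’s ~100-ton mass figure), so the dollar amounts are purely illustrative:

```python
# Crude bootstrapping cost sketch following the reasoning above.
# ISS_COST_USD is an assumed placeholder, NOT a figure from the interview.

ISS_COST_USD = 100e9   # assumed total ISS program cost (placeholder)
ISS_MASS_TONS = 100    # ISS mass used in the article's estimate
COST_PER_TON = ISS_COST_USD / ISS_MASS_TONS  # rough cost per ton of hardware

LEO_TO_MOON = 4        # tons launched to LEO per ton landed on the Moon
SLOWDOWN_FACTOR = 3.5  # savings from slower, single-copy bootstrapping

def bootstrap_cost(tons_landed, years):
    """Total and annual cost of landing `tons_landed` of seed hardware."""
    total = tons_landed * LEO_TO_MOON * COST_PER_TON
    return total, total / years

# Fast bootstrapping: 41 tons landed on the Moon, spread over 30 years.
fast_total, fast_annual = bootstrap_cost(41, 30)

# Slow bootstrapping: 41 / 3.5, or about 12 tons, over the same 30 years.
slow_total, slow_annual = bootstrap_cost(41 / SLOWDOWN_FACTOR, 30)
```

With these placeholder numbers, the fast path comes to roughly $164 billion total and the slow path to under $2 billion per year – consistent with Metzger’s point that even a tenfold error would leave the annual cost a small fraction of NASA’s budget.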

Even if we are off by a factor of 10 or more, it is something we can afford to start doing today. And this doesn’t account for the economic payback we will be getting while starting space industry. There will be intermediate ways to get a payback, such as refueling communications satellites and enabling new scientific activities. The entire cost needn’t be carried by taxpayers, either. It can be funded in part by commercial interests, and in part by students and others taking part in robotics contests.  Perhaps we can arrange shares of ownership in space industry for people who volunteer time developing technologies and doing other tasks like teleoperating robots on the Moon. Call that “telepioneering.”

Perhaps most importantly, the technologies we will be developing – advanced robotics and manufacturing – are the same things we want to be developing here on Earth for the sake of our economy, anyway. So it is a no-brainer to do this! There are also intangible benefits: giving students enthusiasm to excel in their education, focusing the efforts of the maker community to contribute tangibly to our technological and economic growth, and renewing the zeitgeist of our culture.  Civilizations fall when they become old and tired, when their enthusiasm is spent and they stop believing in the inherent value of what they do. Do we want a positive, enthusiastic world working together for the greater good? Here it is.

The Japanese Kibo robotic arm on the International Space Station deploys CubeSats during February 2014. The arm was holding a Small Satellite Orbital Deployer to send out the small satellites during Expedition 38. Credit: NASA
The Japanese Kibo robotic arm on the International Space Station deploys CubeSats during February 2014. The arm was holding a Small Satellite Orbital Deployer to send out the small satellites during Expedition 38. Credit: NASA

UT: We now have smaller computers and the ability to launch CubeSats or smaller accompanying satellites on rocket launches, something that wasn’t available a few decades ago. Does this reduce the costs of sending materials to the Moon for the purposes of what we want to do there?

PM: Most of the papers about starting space industry are from the 1980s and 1990s because that is when most of the investigations were performed, and there hasn’t been funding to continue that work in recent decades. Indeed, changes in technology since then have been game-changing! Back then, some studies were saying that a colony would need to support 10,000 humans in space doing manufacturing tasks before it could make a profit and become economically self-sustaining. Now, because of the growth of robotics, we think we can do it with zero humans, which drastically cuts the cost.

The most complete study of space industry was the 1980 Summer Study at the Ames Research Center. They were the first to discuss the vision of having space industry fully robotic.  They estimated mining robots would need to be made with several tons of mass. More recently, we have actually built lunar mining robots at the Swamp Works at the Kennedy Space Center and they are about one tenth of a ton, each. So we have demonstrated a mass reduction of more than 10 times.

But this added sophistication will be harder to manufacture on the Moon. Early generations will not be able to make the lightweight metal alloys or the electronics packages.  That will require a more complex supply chain. The early generations of space industry should not aim to make things better; they should aim to make things easier to make. “Appropriate Technology” will be the goal. As the supply chain evolves, eventually it will reach toward the sophistication of Earth. Still, as long as the supply chain is incomplete and we are sending things from Earth, we will be sending the lightest and most sophisticated things we can to be combined with the crude things made in space, and so the advances we’ve made since the 1980’s will indeed reduce the bootstrapping cost.

This is the second in a three-part series about building a space base. Yesterday: Why mine on the moon or an asteroid? Tomorrow: Making remote robots smart.