Radio waves absent from the reputed megastructure-encompassed Kepler star?

Radio observations of KIC 8462852, the star reputedly encompassed by a megastructure, were carried out with the Allen Telescope Array.

Astronomers at the SETI Institute (Search for Extraterrestrial Intelligence) have reported their findings after monitoring KIC 8462852, the star reputedly encompassed by a megastructure.  No significant radio signals were detected in observations carried out with the Allen Telescope Array between October 15 and 30 (nearly 12 hours each day).  However, there are caveats, namely that the sensitivity and frequency range were limited, and gaps existed in the coverage (e.g., between 6 and 7 GHz).

Lead author Gerald Harp and the SETI team discussed the various ideas proposed to explain the anomalous Kepler brightness measurements of KIC 8462852: “The unusual star KIC 8462852 studied by the Kepler space telescope appears to have a large quantity of matter orbiting quickly about it. In transit, this material can obscure more than 20% of the light from that star. However, the dimming does not exhibit the periodicity expected of an accompanying exoplanet.”  The team went on to add that, “Although natural explanations should be favored; e.g., a constellation of comets disrupted by a passing star (Boyajian et al. 2015), or gravitational darkening of an oblate star (Galasyn 2015), it is interesting to speculate that the occluding matter might signal the presence of massive astroengineering projects constructed in the vicinity of KIC 8462852 (Wright, Cartier et al. 2015).”

One such megastructure was discussed in a famous paper by Freeman Dyson (1960), and subsequently designated a ‘Dyson Sphere’.  In order to accommodate an advanced civilisation’s increasing energy demands, Dyson remarked that, “pressures will ultimately drive an intelligent species to adopt some such efficient exploitation of its available resources. One should expect that, within a few thousand years of its entering the stage of industrial development, any intelligent species should be found occupying an artificial biosphere which completely surrounds its parent star.”  Dyson further proposed searching for artificial radio emissions stemming from the vicinity of a target star.



An episode of Star Trek TNG featured a memorable discussion regarding a ‘Dyson Sphere’.

The SETI team summarized Dyson’s idea by noting that solar panels could serve to capture starlight as a source of sustainable energy, and likewise highlighted that other “large-scale structures might be built to serve as possible habitats (e.g., “ring worlds”), or as long-lived beacons to signal the existence of such civilizations to technologically advanced life in other star systems by occluding starlight in a manner not characteristic of natural orbiting bodies (Arnold 2013).”  Indeed, bright variable stars such as the famed Cepheid stars have been cited as potential beacons.



Universe Today’s Fraser Cain discusses the ‘Dyson Sphere’ concept.

If a Dyson Sphere encompassed the Kepler-catalogued star, the SETI team reasoned, spacecraft servicing such a large structure might reveal themselves through a powerful wide-bandwidth signal.  The team concluded that their radio observations did not reveal any significant signal stemming from the star (e.g., Fig 1 below).  Yet as noted above, the sensitivity was limited to sources brighter than roughly 100 Jy, the frequency range was restricted to 1-10 GHz, and gaps existed in that coverage.
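To put that 100 Jy sensitivity floor in perspective, the sketch below estimates the equivalent isotropic radiated power (EIRP) a transmitter at the star would need in order to be detectable. The distance and signal bandwidth used here are illustrative assumptions, not values from Harp et al. (2015), whose published limits depend on their specific channel widths and integration times.

```python
# Back-of-envelope EIRP a transmitter at KIC 8462852 would need to exceed
# the quoted ~100 Jy sensitivity floor. Illustrative only -- the distance,
# bandwidth, and detection criteria here are assumptions, not values from
# Harp et al. (2015).
import math

JY = 1e-26                     # W m^-2 Hz^-1
LY = 9.461e15                  # metres per light year

distance_m = 1480 * LY         # assumed distance to KIC 8462852 (~1,480 ly)
sensitivity = 100 * JY         # ATA flux-density floor quoted in the text
bandwidth_hz = 100e6           # assumed signal bandwidth of 100 MHz

eirp = 4 * math.pi * distance_m**2 * sensitivity * bandwidth_hz
print(f"Required EIRP ~ {eirp:.1e} W")          # ~2e23 W
print(f"...or ~ {eirp / 3.8e26:.1e} L_sun")     # a small fraction of the Sun's output
```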

Fig 1 from Harp et al. 2015 (http://arxiv.org/abs/1511.01606) conveys the lack of radio waves detected from the star KIC 8462852 (black symbols), though there were sensitivity and coverage limitations (see text).  The signal from the quasar 3C 84 is shown via blue symbols.

What is causing the odd brightness variations seen in the Kepler star KIC 8462852?  Were those anomalous variations the result of an unknown spurious artefact from the telescope itself, a swath of comets temporarily blocking the star’s light, or perhaps something more extravagant?  The latter should not be hailed as the de facto source simply because an explanation is not readily available.  However, the intellectual exercise of contemplating the technology that advanced civilisations could construct to address certain needs (e.g., energy) is certainly a worthy venture.

What’s the Big Deal About the Pentaquark?

The pentaquark, a novel arrangement of five elementary particles, has been detected at the Large Hadron Collider. This particle may hold the key to a better understanding of the Universe's strong nuclear force. [Image credit: CERN/LHCb experiment]

“Three quarks for Muster Mark!,” wrote James Joyce in his labyrinthine fable, Finnegans Wake. By now, you may have heard this quote – the short, nonsensical sentence that eventually gave the name “quark” to the Universe’s (as-yet-unsurpassed) most fundamental building blocks. Today’s physicists believe that they understand the basics of how quarks combine; three join up to form baryons (everyday particles like the proton and neutron), while two – a quark and an antiquark – stick together to form more exotic, less stable varieties called mesons. Rare four-quark partnerships are called tetraquarks. And five quarks bound in a delicate dance? Naturally, that would be a pentaquark. And the pentaquark, until recently a mere figment of physics lore, has now been detected at the LHC!

So what’s the big deal? Far from just being a fun word to say five-times-fast, the pentaquark may unlock vital new information about the strong nuclear force. These revelations could ultimately change the way we think about our superbly dense friend, the neutron star – and, indeed, the nature of familiar matter itself.

Physicists know of six types of quarks, which are ordered by mass. The lightest of the six are the up and down quarks, which make up the most familiar everyday baryons (two ups and a down in the proton, and two downs and an up in the neutron). The next heaviest are the strange and charm quarks, followed by the bottom and top quarks. And why stop there? In addition, each of the six quarks has a corresponding anti-particle, or antiquark.

Six types of quark, arranged from left to right in order of increasing mass, depicted along with the other elementary particles of the Standard Model. The Higgs boson was added to the right side of the menagerie in 2012. (Image Credit: Fermilab)

An important attribute of both quarks and their anti-particle counterparts is something called “color.” Of course, quarks do not have color in the same way that you might call an apple “red” or the ocean “blue”; rather, this property is a metaphorical way of communicating one of the essential laws of subatomic physics – that quark-containing particles (called hadrons) always carry a neutral color charge.

For instance, the three components of a proton must include one red quark, one green quark, and one blue quark. These three “colors” add up to a neutral particle in the same way that red, green, and blue light combine to create a white glow. Similar laws are in place for the quark and antiquark that make up a meson: their respective colors must be exactly opposite. A red quark will only combine with an anti-red (or cyan) antiquark, and so on.

The pentaquark, too, must have a neutral color charge. Imagine a proton and a meson (specifically, a type called a J/psi meson) bound together – a red, a blue, and a green quark in one corner, and a color-neutral quark-antiquark pair in the other – for a grand total of four quarks and one antiquark, all colors of which neatly cancel each other out.
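The colour bookkeeping described above can be captured in a tiny toy model: treat each colour as a unit vector 120° apart in a plane and each anticolour as its negative, and call a combination neutral when the vectors cancel. This is only the cancellation rule from the text, not the full SU(3) colour physics, and the quark lists below are illustrative.

```python
# A toy bookkeeping model of colour neutrality: represent the three colours
# as unit vectors 120 degrees apart, and anticolours as their negatives.
# A combination is "colour neutral" if the vectors sum to zero. This captures
# only the cancellation rule described above, not the full SU(3) physics.
import cmath

COLOUR = {
    "red":   cmath.exp(0j),
    "green": cmath.exp(2j * cmath.pi / 3),
    "blue":  cmath.exp(4j * cmath.pi / 3),
}
ANTI = {f"anti-{name}": -value for name, value in COLOUR.items()}
CHARGE = {**COLOUR, **ANTI}

def is_neutral(quarks):
    """True if the colour charges of the listed (anti)quarks cancel."""
    return abs(sum(CHARGE[q] for q in quarks)) < 1e-9

print(is_neutral(["red", "green", "blue"]))                      # proton-like: True
print(is_neutral(["red", "anti-red"]))                           # meson-like: True
print(is_neutral(["red", "green", "blue", "blue", "anti-blue"])) # pentaquark-like: True
print(is_neutral(["red", "anti-green"]))                         # not allowed: False
```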

Physicists are not sure whether the pentaquark is created by this type of segregated arrangement or whether all five quarks are bound together directly; either way, like all hadrons, the pentaquark is kept in check by that titan of fundamental dynamics, the strong nuclear force.

The strong nuclear force, as its name implies, is the unspeakably robust force that glues together the components of every atomic nucleus: protons and neutrons and, more crucially, their own constituent quarks. The strong force is so tenacious that “free quarks” have never been observed; they are all confined far too tightly within their parent hadrons.

But there is one place in the Universe where quarks may exist in and of themselves, in a kind of meta-nuclear state: in an extraordinarily dense type of neutron star. In a typical neutron star, the gravitational pressure is so tremendous that protons and electrons cease to be. Their energies and charges melt together, leaving nothing but a snug mass of neutrons.

Physicists have conjectured that, at extreme densities, in the most compact of stars, adjacent neutrons within the core may even themselves disintegrate into a jumble of constituent parts.

The neutron star… would become a quark star.

The difference between a neutron star and a quark star. (Image Credit: Chandra)

Scientists believe that understanding the physics of the pentaquark may shed light on the way the strong nuclear force operates under such extreme conditions – not only in such overly dense neutron stars, but perhaps even in the first fractions of a second following the Big Bang. Further analysis should also help physicists refine their understanding of the ways that quarks can and cannot combine.

The data that gave rise to this discovery – a whopping 9-sigma result! – came out of the LHC’s first run (2010-2013). With the supercollider now operating at double its original energy capacity, physicists should have no problem unraveling the mysteries of the pentaquark even further.

A preprint of the pentaquark discovery, which has been submitted to the journal Physical Review Letters, can be found here.

Will the March 20th Total Solar Eclipse Impact Europe’s Solar Energy Grid?

The first eclipse of 2015 is coming right up on Friday, March 20th, and may provide a unique challenge for solar energy production across Europe.

Sure, we’ve been skeptical of late about the many websites touting a ‘blackout’ and Y2K-like doom pertaining to the March 20th total solar eclipse. And while it’s true that comets and eclipses really do bring out the ‘End of the World of the Week’ types across ye olde web, there’s actually a fascinating story of science at the core of next week’s eclipse and the challenge it poses to energy production.

But first, a brief recap of the eclipse itself. Dubbed the “Equinox Eclipse,” totality occurs only over a swath of the North Atlantic, passing over the distant Faroe Islands and Svalbard. Germany and central Europe can expect a Sun that is roughly 80% obscured at the eclipse’s maximum.

The magnitude of the March 20th solar eclipse across Europe. Credit: Michael Zeiler/GreatAmericanEclipse.com

We wrote a full guide with the specifics for observing this eclipse yesterday. But is there a cause for concern when it comes to energy production?

A power grid is a huge balancing act.  As power production decreases from one source, other sources must be brought online to compensate. This is a major challenge — especially in terms of solar energy production.

Residential solar panels in Germany. Credit: Wikimedia Commons/ Sideka Solartechnik.

Germany currently stands at the forefront of solar energy technology, representing a whopping quarter of all solar energy capacity installed worldwide. Germany now relies on solar power for almost 7% of its annual electricity production, and during the sunniest hours, has used solar panels to satisfy up to 50% of the country’s power demand.

We recently caught up with Barry Fischer to discuss the issue. Fischer is the Head Writer at Opower, a software company that uses data to help electric and gas utilities improve their customer experience. Based on Opower’s partnerships with nearly 100 utilities worldwide, the company has amassed the world’s largest energy dataset of its kind, documenting energy consumption patterns across more than 55 million households around the globe.

A study published last week by Opower highlights data from the partial solar eclipse last October over the western United States. There’s little historical precedent for the impact that an eclipse could have on the solar energy grid. For example, during the August 11th, 1999 total solar eclipse which crossed directly over Europe, less than 0.1% of utility electricity was generated using solar power.

Looking at the drop in power production during the October 2014 solar eclipse. Credit: Opower.

What they found was intriguing. Although the 2014 partial solar eclipse only obscured 30 to 50% of the Sun, solar electric production dropped over an afternoon span of nearly three hours before returning to a normal pattern.

Examining data from 5,000 solar-powered homes in the western United States, Opower found that during the eclipse those homes sent 41% less electricity back to the grid than normal. Along with a nearly 1,000 megawatt decline in utility-scale solar power production, these drop-offs were compensated for by grid operators ramping up traditional thermal power plants that were most likely fueled by natural gas.

No serious problems were experienced during the October 23rd, 2014 partial solar eclipse in terms of solar electricity production in the southwestern United States, though it is interesting to note that the impact of the eclipse on solar energy production could be readily detected and measured.

The projected effect of the March 20th eclipse on solar power production. Credit: Opower.

How does the drop and surge in solar power output anticipated for the March 20th eclipse differ from, say, the kind presented by the onset of night, or a cloudy day? “The impact of an eclipse can register broadly – and unusually rapidly – across an entire region,” Fischer told Universe Today. On a small scale, one area may be cloudy, while on a larger regional scale, other areas of clear or partly sunny skies can compensate. An eclipse — even a partial one — is fundamentally different, because the sudden onset and the conclusion are relatively uniform over a large region.

The March 20th event offers an unprecedented chance to study the effects of an eclipse on large-scale solar production up close. A study (in German) by the University of Applied Sciences in Berlin suggests that solar power production will fall at a rate 2.7 times faster than usual as the eclipse progresses over a span of 75 minutes. This is the equivalent of switching off one medium-sized power plant per minute.

The anticipated slingshot might be just as challenging, as 18 gigawatts of power come back online at the conclusion of the eclipse in just over an hour. And as opposed to the 2014 eclipse over the U.S., which ended towards sunset, the key rebound period for the March 20th eclipse will fall around local noon, during a peak production time.
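A bit of back-of-envelope arithmetic, sketched below, shows the scale of those ramps. The 65-minute rebound window is an assumption standing in for “just over an hour,” and the 300 MW “medium” plant size is likewise an assumed figure for illustration.

```python
# Rough ramp-rate arithmetic using the figures quoted above. The 65-minute
# rebound window is an assumption for "just over an hour"; the point is the
# order of magnitude, not a precise forecast.
rebound_gw = 18.0          # solar capacity returning after the eclipse (per the text)
rebound_minutes = 65.0     # assumed duration for "just over an hour"

ramp = rebound_gw / rebound_minutes
print(f"Rebound ramp ~ {ramp:.2f} GW per minute")   # ~0.28 GW/min

# A mid-sized thermal plant is a few hundred megawatts, so this is roughly
# the equivalent of bringing one such plant online every minute or so --
# consistent with the "one power plant per minute" comparison for the drop.
medium_plant_gw = 0.3      # assumed size of a "medium" plant (300 MW)
print(f"...about one {medium_plant_gw*1000:.0f} MW plant every "
      f"{medium_plant_gw / ramp:.1f} minutes")
```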

Fischer also noted that “the second half of the partial solar eclipse will also pose a notable challenge” for the grid, as it is flooded with solar power production 3.5 times faster than normal. This phenomenon could also serve as a great model for what could occur each morning on a grid that is increasingly reliant on solar power, as energy production ramps up at sunrise. Such a reality may be only 15 years away, as Germany projects installed solar capacity to top 66 gigawatts by 2030.

The Crescent Dunes Solar Energy Project outside of Tonopah, Nevada. Credit:  Wikimedia Commons/Amble. Licensed under a CC BY-SA 4.0 license.

What is the anticipated impact for future eclipses, such as the 2017 and 2024 total solar eclipses over the U.S.?

This eclipse may serve as a great dry run for modeling what could occur as reliance on solar energy production grows.

Such is the modern technical society we live in. It’s fascinating to think that eclipses aren’t only marvelous celestial spectacles: their effects on power production may actually serve as a model for the smart grids of tomorrow.


Here’s a Better Use for Fighter Jets: Launching Satellites

Artist's impression of the ALASA being deployed by a USAF fighter jet. Credit: DARPA

For decades, the human race has been deploying satellites into orbit. And in all that time, the method has remained the same – a satellite is placed aboard a booster rocket, which is then launched from one of a limited number of fixed ground facilities with limited slots available. This process not only requires a month or more of preparation, it requires years of planning and costs upwards of millions of dollars.

On top of all that, fixed launch sites are limited in terms of the timing and direction of orbits they can establish, and launches can be delayed by things as simple as bad weather.  As such, DARPA has been working towards a new method of satellite deployment, one which eliminates the need for fixed launch sites and large ground-launched boosters. It’s known as the Airborne Launch Assist Space Access (ALASA) program, a concept which could turn any airstrip into a spaceport and significantly reduce the cost of deploying satellites.

What ALASA comes down to is a cheap, expendable dispatch launch vehicle that can be mounted onto the underside of an aircraft, flown to a high altitude, and then launched from the craft into low Earth orbit. By using the aircraft as a first stage, satellite deployment will become not only much cheaper, but also much more flexible.

DARPA’s aim in creating ALASA was not only to achieve a three-fold decrease in launch costs, but also to create a system that could carry payloads of up to 45 kg (100 lbs) into orbit with as little as 24 hours’ notice. Currently, small satellite payloads cost roughly $66,000 a kilogram ($30,000 per pound) to launch, and payloads often must share a launcher. ALASA seeks to bring that down to a total of $1 million per launch, and to ensure that satellites can be deployed more precisely.
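Those figures hang together: a $1 million launch carrying a full 45 kg payload works out to roughly a third of today’s quoted per-kilogram cost, as the quick check below shows (the arithmetic uses only the numbers quoted above).

```python
# Quick check of the cost figures quoted above: a $1M launch for a full
# 45 kg payload works out to roughly a threefold reduction per kilogram.
current_cost_per_kg = 66_000        # USD/kg, quoted for small-satellite payloads
target_launch_cost = 1_000_000      # USD per ALASA launch
max_payload_kg = 45                 # ALASA payload goal

target_cost_per_kg = target_launch_cost / max_payload_kg
print(f"Target: ~${target_cost_per_kg:,.0f}/kg")                               # ~$22,000/kg
print(f"Reduction factor: ~{current_cost_per_kg / target_cost_per_kg:.1f}x")   # ~3x
```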

Artist’s concept of the ALASA second stage firing. Credit: DARPA

News of the agency’s progress towards this goal was announced at the 18th Annual Commercial Space Transportation Conference (Feb 4th and 5th) in Washington, DC. Bradford Tousley, the director of DARPA’s Tactical Technology Office, reported on the progress of the agency’s program, claiming that it had successfully completed phase one, which resulted in three viable system designs.

Phase two – which began in March of 2014 when DARPA awarded Boeing the prime contract for development – will consist of DARPA incorporating commercial-grade avionics and advanced composites into the design. Once this is complete, it will involve launch tests that will gauge the launch vehicle’s ability to deploy satellites to desired locations.

“We’ve made good progress so far toward ALASA’s ambitious goal of propelling 100-pound satellites into low earth orbit (LEO) within 24 hours of call-up, all for less than $1 million per launch,” said Tousley in an official statement. “We’re moving ahead with rigorous testing of new technologies that we hope one day could enable revolutionary satellite launch systems that provide more affordable, routine and reliable access to space.”

These technologies include the use of a high-energy monopropellant, in which fuel and oxidizer are combined into a single liquid. This approach, which is still largely experimental, will also cut the costs associated with satellite launches by simplifying engine design and reducing the cost of engine manufacture and operation.

Artist’s concept of the ALASA vehicle deploying into orbit. Credit: DARPA

Also, the ability to launch satellites from runways instead of fixed launch sites presents all kinds of advantages. At present, the Department of Defense (DoD) and other government agencies require scheduling years in advance because the number of slots and locations are very limited. This slow, expensive process is causing a bottleneck when it comes to deploying essential space assets, and is also inhibiting the pace of scientific research and commercial interests in space.

“ALASA seeks to overcome the limitations of current launch systems by streamlining design and manufacturing and leveraging the flexibility and re-usability of an air-launched system,” said Mitchell Burnside Clapp, DARPA program manager for ALASA. “We envision an alternative to ride-sharing for satellites that enables satellite owners to launch payloads from any location into orbits of their choosing, on schedules of their choosing, on a launch vehicle designed specifically for small payloads.”

The program began in earnest in 2011, with the agency conducting initial trade studies and market/business case analyses. In November of that same year, work began on both the system designs and the engine and propellant technologies. Phase 2 is planned to last late into 2015, with the agency conducting tests of both the vehicle and the monopropellant.

Pending a successful run, the program plan includes 12 orbital launches to test the integrated ALASA prototype system – which is slated to take place in the first half of 2016. Depending on test results, the program would conduct up to 11 further demonstration launches through the summer of 2016. If all goes as planned, ALASA would provide convenient, cost-effective launch capabilities for the growing government and commercial markets for small satellites, which are currently the fastest-growing segment of the space launch industry.

And be sure to check out this concept video of the ALASA, courtesy of DARPA:

Further Reading: DARPA TTO, DARPA News

How We’ve ‘Morphed’ From “Starry Night” to Planck’s View of the BICEP2 Field

New images returned by the Planck telescope (right) begin to rival the complexity and beauty of a great artist’s imagination – Starry Night. A visualization of the Planck data represents the interaction of interstellar dust with the galactic magnetic field. Color defines the intensity of dust emissions, and measurements of polarized light reveal the direction of the magnetic field lines. (Credits: Vincent van Gogh, ESA)

From the vantage point of a window in an asylum, Vincent van Gogh painted one of the most noted and valued artistic works in human history. It was the summer of 1889. Rendered in his post-impressionist brush strokes, Starry Night depicts a pre-dawn sky that undulates, flows and is never settled. Scientific discoveries are revealing a Cosmos with just such characteristics.

Since Vincent’s time, artists and scientists have taken their respective paths to convey and understand the natural world. The latest images released by the European Planck space telescope reveal exquisite new details of our Universe that begin to touch upon the brush strokes of the great master while looking back nearly to the beginning of time. In the 125 years since van Gogh, scientists have constructed a progressively more intricate and incredible description of the Universe.

New images returned by the Planck telescope (right) begin to rival the complexity and beauty of a great artist’s imagination – Starry Night. A visualization of the Planck data represents the interaction of interstellar dust with the galactic magnetic field. Color defines the intensity of dust emissions, and measurements of polarized light reveal the direction of the magnetic field lines. (Credits: Vincent van Gogh, ESA)

The path from Van Gogh to the Planck Telescope imagery is indirect, an abstraction akin to the impressionism of van Gogh’s era. Impressionists in the 1800s showed us that the human mind could interpret and imagine the world beyond the limitations of our five senses. Furthermore, optics since the time of Galileo had begun to extend the capability of our senses.

A photograph of James Clerk Maxwell and a self-portrait of Vincent van Gogh. Maxwell’s equations and impressionism in the fine arts in the 19th Century sparked an enhanced perception, expression and abstraction of the World and began a trek of knowledge and technology into the modern era. (Credit: National Gallery of Art, Public Domain)

Mathematics is perhaps the greatest abstraction of our vision of the World, the Cosmos. The path of science from the era of van Gogh began with his contemporary James Clerk Maxwell, who drew inspiration from the experimentalist Michael Faraday. Maxwell’s equations mathematically define the nature of electricity and magnetism, and since Maxwell, electricity, magnetism and light have been intertwined. His equations are now subsumed within a more universal framework – the Standard Model. The accompanying Universe Today article by Ramin Skibba describes in more detail the new findings by Planck mission scientists and their impact on the Standard Model.

The work of Maxwell and experimentalists such as Faraday, Michelson and Morley built an overwhelming body of knowledge upon which Albert Einstein drew to write his papers of 1905, his miracle year (annus mirabilis). His theories of the Universe have been interpreted and verified time and again, and they lead directly to the Universe studied by scientists employing the Planck telescope.

The first Solvay Conference in 1911 was organized by Max Planck and Hendrik Lorentz. Planck is standing, second from left. The first Solvay conference, by invitation only, included most of the greatest scientists of the early 20th Century. While Planck is known for his work on quanta – the groundwork for quantum theory and the Universe in minutiae – the Planck telescope is surveying the Universe in the large. Physicists are closer to unifying the nature of the two extremes. Insets – Planck (1933, 1901).

The German physicist Max Planck, for whom the ESA telescope is named, was among the first to recognize the importance of Einstein’s work, and he eventually helped bring Einstein to Berlin, away from the obscurity of a patent office in Bern, Switzerland.

As Einstein spent a decade to complete his greatest work, the General Theory of Relativity, astronomers began to apply more powerful tools to their trade. Edwin Hubble, born in the year van Gogh painted Starry Night, began to observe the night sky with the most powerful telescope in the World, the Mt Wilson 100 inch Hooker Telescope. In the 1920s, Hubble discovered that the Milky Way was not the whole Universe but rather an island universe, one amongst billions of galaxies. His observations revealed that the Milky Way was a spiral galaxy of a form similar to neighboring galaxies, for example, M31, the Andromeda Galaxy.

Pablo Picasso and Albert Einstein were human wrecking balls in their respective professions. What began with Faraday and Maxwell, van Gogh and Gauguin was taken to new heights. We are encapsulated in the technology derived from these masters, but we are able to break free of the confinement technology can impose through the expression and art of Picasso and his contemporaries.

Einstein’s equations and Picasso’s abstraction created another rush of discovery and expressionism that propelled us through another 50 years. Their influence continues to impact our lives today.

The Andromeda Galaxy, M31, is the nearest spiral galaxy to the Milky Way and spans several times the angular size of the Moon. It was first photographed by Isaac Roberts in 1899 (inset). Spiral structure is a function of gravity and the propagation of shock waves, and stretching across the expanses of such galaxies are electromagnetic fields like those reported by Planck mission scientists.

Telescopes of Hubble’s era reached their peak with the Palomar 200 inch telescope, four times the light gathering power of Mount Wilson’s. Astronomy had to await the development of modern electronics. Improvements in photographic techniques would pale in comparison to what was to come.

The development of electronics was accelerated by the pressures placed upon the opposing forces during World War II. Karl Jansky founded radio astronomy in the 1930s, and the field benefited from research that followed during the war years. Jansky detected the radio signature of the Milky Way. As Maxwell and others imagined, astronomy began to expand beyond visible light – into the infrared and radio waves. The discovery of the Cosmic Microwave Background (CMB) in 1964 by Arno Penzias and Robert Wilson is arguably the greatest discovery from observations in the radio (and microwave) region of the electromagnetic spectrum.

From 1937 to the present day, radio astronomy has been an ever refining merger of electronics and optics. Karl Jansky’s first radio telescope, 1937 (inset) and the great ALMA array now in operation studying the Universe in the microwave region of the electromagnetic spectrum. (Credits: ESO)

Analog electronics could augment photographic studies. Vacuum tubes led to photo-multiplier tubes that could count photons and measure more accurately the dynamics of stars and the spectral imagery of planets, nebulas and whole galaxies. Then, in 1947, three physicists at Bell Labs – John Bardeen, Walter Brattain, and William Shockley – created the transistor, which continues to transform the World today.

For astronomy and our image of the Universe, it meant sharper imagery spanning the whole electromagnetic spectrum. Infrared astronomy developed slowly beginning in the 1800s, but it came of age only with solid-state electronics in the 1960s. Microwave, or millimeter, radio astronomy required a marriage of radio astronomy and solid-state electronics; the first practical millimeter-wave telescope began operations in 1980 at Kitt Peak Observatory.

An early work of Picasso (center), the work at Bell Labs of John Bardeen, Walter Brattain, and William Shockley and the mobile art of Alexander Calder. As artists attempt to balance color and shape, the Bell Lab engineers balanced electrons essentially on the head of a pin, across junctions to create the first transistor.

With further improvements in solid-state electronics, extremely accurate timing devices, and low-temperature detectors, astronomy has reached the present day. With modern rocketry, sensitive instruments such as the Hubble and Planck space telescopes have been lofted into orbit, above the opaque atmosphere surrounding the Earth.

In 1964, the Cosmic Microwave Background (CMB) was discovered. In the early 1990s, the COBE space telescope returned even more detailed results and now Planck has refined and expanded upon IRAS, COBE and BICEP observations of the CMB. Inset, first light observations of the Planck mission. (Photo Credits: ESA)

Astronomers and physicists now probe the Universe across the whole electromagnetic spectrum, generating terabytes of data, and abstractions of the raw data allow us to look out into the Universe with what is effectively a sixth sense – one given to us by 21st-century technology. What a remarkable coincidence that the observations of our best telescopes, peering across hundreds of thousands of light years, and even further, back 13.8 billion years to the beginning of time, reveal images of the Universe not unlike the brilliant and beautiful paintings of a human whose mind gave him no choice but to see the world differently.

Now 125 years later, this sixth sense forces us to see the World in a similar light. Peer up into the sky and you can imagine the planetary systems revolving around nearly every star, swirling clouds of spiral galaxies, one even larger in the sky than our Moon, and waves of magnetic fields everywhere across the starry night.

Consider what the Planck Mission is revealing, questions it is answering and new ones it is raising – It Turns Out Primordial Gravitational Waves Weren’t Found.

Making the Trip to Mars Cheaper and Easier: The Case for Ballistic Capture

A new proposal for sending craft to Mars could save money and offer more flexible launch windows. Credit: NASA

When sending spacecraft to Mars, the current, preferred method involves shooting the spacecraft towards Mars at full speed, then performing a braking maneuver once the ship is close enough to slow it down and bring it into orbit.

Known as the “Hohmann Transfer” method, this type of maneuver is effective. But it is also quite expensive and relies very heavily on timing. Hence a new idea is being proposed, which would involve sending the spacecraft out ahead of Mars’ orbital path and then waiting for Mars to come on by and scoop it up.

This is what is known as “Ballistic Capture”, a new technique proposed by Professor Francesco Topputo of the Polytechnic Institute of Milan and Edward Belbruno, a visiting associate researcher at Princeton University and former member of NASA’s Jet Propulsion Laboratory.

In their research paper, which was posted to the arXiv astrophysics preprint server in late October, they outlined the benefits of this method versus traditional ones. In addition to cutting fuel costs, ballistic capture would also provide some flexibility when it comes to launch windows.

MAVEN was launched into a Hohmann Transfer Orbit with periapsis at Earth’s orbit and apoapsis at the distance of the orbit of Mars. Credit: NASA

Currently, launches between Earth and Mars are limited to windows where the orbital alignment between the two planets is just right. Miss this window, and you have to wait another 26 months for a new one to come along.
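That 26-month figure is just the Earth–Mars synodic period, the time it takes for the two planets to return to the same relative alignment; a quick calculation bears it out.

```python
# The ~26-month spacing between Mars launch windows is just the Earth-Mars
# synodic period -- the time for the Earth-Mars geometry to repeat.
earth_year_days = 365.25
mars_year_days = 686.98

synodic_days = 1 / (1 / earth_year_days - 1 / mars_year_days)
print(f"Synodic period: {synodic_days:.0f} days "
      f"(~{synodic_days / 30.44:.1f} months)")   # ~780 days, ~25.6 months
```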

At the same time, sending a rocket into space, through the vast gulf that separates Earth’s and Mars’ orbits, and then firing thrusters in the opposite direction to slow down requires a great deal of fuel. This in turn means that the spacecraft responsible for transporting satellites, rovers, and (one day) astronauts need to be larger and more complicated, and hence more expensive.

As Belbruno told Universe Today via email:  “This new class of transfers is very promising for giving a new approach to future Mars missions that should lower cost and risk.  This new class of transfers should be applicable to all the planets. This should give all sorts of new possibilities for missions.”

The idea was first proposed by Belbruno while he was working for JPL, where he was trying to come up with numerical models for low-energy trajectories. “I first came up with the idea of ballistic capture in early 1986 when working on a JPL study called LGAS (Lunar Get Away Special),” he said. “This study involved putting a tiny 100 kg solar electric spacecraft in orbit around the Moon that was first ejected from a Get Away Special Canister on the Space Shuttle.”

The Hiten spacecraft, part of the MUSES Program, was built by the Institute of Space and Astronautical Science of Japan and launched on January 24, 1990. It was Japan’s first lunar probe. Credit: JAXA

The LGAS design was not a resounding success, as the spacecraft would have taken two years to reach the Moon. But in 1990, when Japan was looking to salvage its troubled lunar mission, Hiten, he submitted proposals for a ballistic capture attempt that were quickly incorporated into the mission.

“The time of flight for this one was 5 months,” he said. “It was successfully used in 1991 to get Hiten to the Moon.” Since that time, similar low-energy designs have been used for other lunar missions, including ESA’s SMART-1 mission in 2004 and NASA’s GRAIL mission in 2011.

But it is future missions, which involve much greater distances and expenditures of fuel, that Belbruno felt would most benefit from this method. Unfortunately, the idea met with some resistance, as no missions appeared well-suited to the technique.

“Ever since 1991, when Japan’s Hiten used the new ballistic capture transfer to the Moon, it was felt that finding a useful one for Mars was not possible due to Mars’ much longer distance and its high orbital velocity about the Sun. However, I was able to find one in early 2014 with my colleague Francesco Topputo.”

India’s Mars Orbiter Mission (MOM) was one of the most successful examples of the Hohmann Transfer method. Credit: ISRO

Granted, there are some drawbacks to the new method. For one, a spacecraft sent out ahead of Mars’ orbital path would take longer to get into orbit than one that slows itself down to establish orbit.

In addition, the Hohmann Transfer method is time-tested and reliable. One of the most successful applications of this maneuver took place back in September, when the Mars Orbiter Mission (MOM) entered its historic orbit around the Red Planet. This not only constituted the first time an Asian nation reached Mars; it was also the first time that any space agency had achieved a Mars orbit on the first try.

Nevertheless, the possibility of improvements over the current method of sending craft to Mars has people at NASA excited. As James Green, director of NASA’s Planetary Science Division, said in an interview with Scientific American: “It’s an eye-opener. This [ballistic capture technique] could not only apply here to the robotic end of it but also the human exploration end.”

Don’t be surprised then if upcoming missions to Mars or the outer Solar System are performed with greater flexibility, and on a tighter budget.

Further Reading: arXiv Astrophysics

Elon Musk’s Hyperloop Might Become A Reality After All

Concept art for the Hyperloop high-speed train. Credit: Reuters

Fans of Elon Musk and high-speed transit are sure to remember the Hyperloop. Back in 2013, Musk dropped the idea into the public mind with a paper claiming that, using the right technology, a high-speed train could make the trip from San Francisco to Los Angeles in just 35 minutes.

However, Musk also indicated that he was too busy to build such a system, but that others were free to take a crack at it. And it seems that a small startup from El Segundo, California is prepared to do just that.

That company is JumpStartFund, a startup that combines elements of crowdfunding and crowd-sourcing to make innovation happen. Dirk Ahlborn, the CEO of JumpStartFund, believes they can build Musk’s vision of a solar-powered transit system that would transport people at speeds of up to 1,280 km/h (800 mph).

Together with SpaceX, JumpStartFund has created a subsidiary called Hyperloop Transportation Technologies (HTT), Inc. to oversee all the components necessary to create the system. This included bringing together 100 engineers from all over the country who work for such giants of industry as Boeing, NASA, Yahoo!, Airbus, SpaceX, and Salesforce.

Concept art of what a completed Hyperloop would look like amidst the countryside. Credit: HTT/JumpStartFund

Last week, these engineers came together for the first time to get the ball rolling, and what they came up with was a 76-page report (entitled “Crowdstorm”) that spelled out exactly how they planned to proceed. By their own estimates, they believe they can complete the Hyperloop in just 10 years, and at a cost of $16 billion.

A price tag like that would be sure to scare most developers away. However, Ahlborn is undeterred and believes that all obstacles, financial or otherwise, can be overcome. As he professed in an interview with Wired this week: “I have almost no doubt that once we are finished, once we know how we are going to build and it makes economical sense, that we will get the funds.”

The HTT report also covered the basic design and engineering principles that would go into the building of the train, as Musk originally proposed it. Basically, this consists of pod cars that provide their own electricity through solar power, and which are accelerated through a combination of linear induction motors and low air pressure.

Much has been made of this latter aspect of the idea, which has often been compared to the kinds of pneumatic tubes that used to send messages around office buildings in the mid-20th century. But of course, what is called for with the Hyperloop is a bit more sophisticated.

Concept art showing different “classes” for travel, which would include business class for those who can afford it. Credit: HTT/JumpStartFund

Basically, the Hyperloop will operate by providing each capsule with a soft air cushion to float on, avoiding direct contact with rails or the tube, while electromagnetic induction is used to speed up or slow the capsules down, depending on where they are in the transit system.

However, the HTT engineers indicated that such a system need not be limited to California. As it says in the report: “While it would of course be fantastic to have a Hyperloop between LA and SF as originally proposed, those aren’t the only two cities in the US and all over the world that would seriously benefit from the Hyperloop. Beyond the dramatic increase in speed and decrease in pollution, one of the key advantages the Hyperloop offers over existing designs for high-speed rail is the cost of construction and operations.”

The report also indicated the kind of price bracket they would be hoping to achieve. As it stands, HTT’s goal is “to keep the ticket price between LA and SF in the $20-$30 range,” with double that amount for return tickets. But with an overall price tag of $16 billion, the report also makes allowances for going higher: “[Our] current projected cost is closer to $16 billion,” they claim, “implying a need for a higher ticket price, unless the loop transports significantly more than 7.4 million annually, or the timeline for repayment is extended.”
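The report’s caution about pricing is easy to see with some crude arithmetic: at the quoted ridership and fares, fare revenue alone would take many decades to repay a $16 billion build-out. The sketch below ignores operating costs, financing, and ridership growth; it is a sanity check on the quoted figures, not a business model.

```python
# A crude illustration of why the report flags the ticket price: at the
# quoted ridership and fares, fare revenue alone repays $16B only over many
# decades. This ignores operating costs, financing, and demand growth -- it
# is a sanity check on the numbers in the report, not a business model.
capital_cost = 16e9          # USD, projected construction cost
riders_per_year = 7.4e6      # annual one-way trips cited in the report
ticket_price = 25            # USD, midpoint of the $20-30 range

annual_revenue = riders_per_year * ticket_price
print(f"Annual fare revenue: ${annual_revenue/1e6:.0f}M")                # ~$185M
print(f"Years to recoup capital: ~{capital_cost / annual_revenue:.0f}")  # ~86 years
```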

In addition, the report indicates that they are still relying heavily on Musk’s alpha document for much of their cost assessment. As a result, they can’t be specific on pricing or on what kinds of revenues the Hyperloop can be expected to generate once it’s up and running.

The Hyperloop, as originally conceived within Musk’s alpha document. Credit: Tesla Motors

Also, there are still plenty of logistical issues that need to be worked out, not to mention the hurdles of zoning, local politics and environmental assessments. Basically, HTT can look forward to countless challenges before they even begin to break ground. And since they are depending on crowdfunding to raise the necessary funds, it is not even certain whether they will be able to meet the burden of paying for it.

However, both Ahlborn and the HTT engineering team remain optimistic. Ahlborn believes the financial hurdles will be overcome, and if there was one thing that came through in the team’s report, it was the belief that something like the Hyperloop needs to happen in the near future. As the team wrote in the opening section of “Crowdstorm”:

“It quickly becomes apparent just how dramatically the Hyperloop could change transportation, road congestion and minimize the carbon footprint globally. Even without naming any specific cities, it’s apparent that the Hyperloop would greatly increase the range of options available to those who want to continue working where they do, but don’t wish to live in the same city, or who want to live further away without an unrealistic commute time; solving some of the major housing issues some metropolitan areas are struggling with.”

Only time will tell if the Hyperloop will become the “fifth mode of transportation” (as Musk referred to it initially) or just a pipe-dream. But when it was first proposed, it was clear that what the Hyperloop really needed was someone who believed in it and enough money to get it off the ground. As of now, it has the former. One can only hope the rest works itself out with time.

Further Reading: JumpStartFund, SpaceX/Hyperloop, Crowdstorm

A History of Launch Failures: “Not Because They are Easy, but Because They are Hard”

The words of Kennedy’s Rice University speech hold especially true when NASA’s goals seem challenged and suddenly not so close at hand. (Photo Credit: NASA)

Over the 50-plus years since President John F. Kennedy’s Rice University speech, spaceflight has proven to be hard. It doesn’t take much to wreck a good day to fly.

Befitting a Halloween story, rocket launches, orbital insertions, and landings are what make for sleepless nights. These make-or-break events of space missions can be things that go bump in the night: sometimes you get second chances and sometimes not. Here’s a look at some of the past mission failures that occurred at launch. Consider this a first installment in an ongoing series of articles – “Not Because They Are Easy.”

A still image from one of several videos of the ill-fated Antares launch of October 28, 2014, taken by engineers at the Mid-Atlantic Regional Spaceport, Wallops, VA. (Credit: NASA)

The evening of October 28, 2014, was another of those hard moments in the quest to explore and expand humanity’s presence in space. Ten years ago, Orbital Sciences Corporation sought an engine to fit the performance requirements of a new launch vehicle. Their choice was a Soviet-era liquid fuel engine, one considered cost-effective, meeting requirements, and providing good margins for performance and safety. The failure of the Antares rocket this week could be due to a flaw in the AJ-26, or it could stem from any of a myriad of other rocket parts. Was it decisions inside NASA that cancelled or delayed engine development programs and led OSC and Lockheed Martin to choose “made in Russia” rather than “made in America”?

Here are other unmanned launch failures of the past 25 years:

Falcon 1, Flight 2, March 21, 2007. Fairings are hard. There are fairings that surround upper-stage engines and a fairing covering the payload. Fairings must not only separate but also avoid causing collateral damage. The second flight of the Falcon 1 is an example: at first-stage separation, the fairing swiped the second-stage nozzle. Later, overcompensation by the control system, traceable to the staging event, led to loss of attitude control; however, the launch achieved most of its goals and the mission was considered a success. (View: 3:35)

Proton M Launch, Baikonur Cosmodrome, July 2, 2013. The Proton M is the Russian space program’s workhorse for unmanned payloads. On this day, the navigation, guidance, and control system failed moments after launch: angular velocity sensors of the guidance control system had been installed backwards. Fortunately, the Proton M veered away from its launch pad, sparing the pad from damage.

Ariane V Maiden Flight, June 4, 1996. The Ariane V was carrying an ambitious ESA mission called Cluster – a set of four satellites to fly in tetrahedral formation to study dynamic phenomena in the Earth’s magnetosphere. The ESA launch vehicle reused flight software from the successful Ariane IV. Due to differences in the flight path of the Ariane V, data processing led to an overflow – a 64-bit floating point value overflowing a 16-bit integer. The fault remained undetected and flight control reacted in error. The vehicle veered off-course, the structure was over-stressed, and it disintegrated 37 seconds into flight. Fallout from the explosion caused scientists and engineers to don protective gas masks. (View: 0:50)
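The failure mode is easy to illustrate. The sketch below shows what happens when a wide floating-point value is forced into a signed 16-bit integer with no adequate range handling; the velocity figure is invented for illustration and is not a value from the actual Ariane V telemetry.

```python
# A minimal sketch of the Ariane V failure mode: a 64-bit float conversion
# into a signed 16-bit integer whose range can be exceeded. The value here
# is made up for illustration; the real fault was in reused inertial
# reference software whose input-range assumptions no longer held.
INT16_MIN, INT16_MAX = -2**15, 2**15 - 1   # representable range: -32768..32767

def to_int16_unchecked(x: float) -> int:
    """Mimics a conversion that silently wraps out-of-range values."""
    return ((int(x) - INT16_MIN) % 2**16) + INT16_MIN

horizontal_bias = 48_231.7   # hypothetical value exceeding the 16-bit range
if not (INT16_MIN <= horizontal_bias <= INT16_MAX):
    # The Ariane IV-era code assumed this branch could never be reached.
    print("Operand error: value outside signed 16-bit range")

print(to_int16_unchecked(horizontal_bias))   # wraps to a nonsense value (-17305)
```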

Delta II, January 17, 1997. The Delta II is one of the most successful rockets in the history of space flight, but not on this day. Varied configurations change the number of solid rocket motors strapped to the first stage. The US Air Force satellite GPS IIR-1 was to be lifted to Earth orbit, but a Castor 4A solid rocket booster failed seconds after launch; the fault was a hairline fracture in the booster casing. Both unspent liquid and solid fuel rained down on the Cape, destroying launch equipment, buildings, and even parked automobiles. This is one of the most well-documented launch failures in history.

Compilation of Early Launch Failures. Beginning with several of the early failures of von Braun’s V-2, this video compiles many failures over a 70-year period. The early US space program endured multiple launch failures as it worked at breakneck speed to catch up with the Soviets after Sputnik. NASA did not yet exist; the Air Force and the Army had competing designs, and it was the Army, with the German rocket scientists including von Braun, that launched the Juno 1 rocket carrying Explorer 1 on January 31, 1958.

One must always realize that, while spectacular to viewers, a rocket launch represents years of development, lessons learned, and multiple revisions. The payloads carried involve many hundreds of thousands of work-hours. Launch vehicles and payloads become quite personal. NASA and ESA have offered grief counseling to their engineers after failures.

We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.

Kennedy’s Rice University Speech, September 12, 1962

Making Cubesats do Astronomy

Will cubesats develop a new technological branch of astronomy? Goddard engineers are taking the necessary steps to make cubesat-sized telescopes a reality. (Credit: NASA, UniverseToday/TRR)

One doesn’t take two cubesats and rub them together to make static electricity. Rather, you send them on a brief space voyage to low-earth orbit (LEO) and space them apart some distance and voilà, you have a telescope. That is the plan of NASA’s Goddard Space Flight Center engineers and also what has been imagined by several others.

Cubesats are one of the big crazes in the new space industry. But nearly all that have flown to date are simple rudderless cubes, taking photos only when they happen to be oriented correctly. The GSFC engineers are planning to give two cubes substantial control of their positions relative to each other and to the Universe surrounding them. With one holding a telescope and the other a disk to blot out the bright Sun, their cubesat telescope will do what not even the Hubble Space Telescope is capable of, and for far less money.

Semper (left), Calhoun, and Shah are advancing the technologies needed to create a virtual telescope that they plan to demonstrate on two CubeSats. (Image/Caption Credit: NASA/W. Hrybyk)

The 1U, the 3U, the 9U – these are all cubesats of different sizes, built from the same basic unit. A 1U cubesat measures 10 x 10 x 10 centimeters. A cube of this size will hold one liter of water (about one quart), which is one kilogram by weight. Or replace that water with hydrazine and you have very close to 1 kilogram of monopropellant rocket fuel, which can take a cubesat places.

GSFC aerospace engineers, led by Neerav Shah, don’t want to go far; they just want to look at things far away using two cubesats. Their design will use one as a telescope – some optics and a good detector – while the other cubesat will stand off about 20 meters and function as a coronagraph. The coronagraph cubesat will act as a sun mask, an occulting disk to block out the bright rays from the surface of the Sun so that the cubesat telescope can look with high resolution at the corona and the edge of the Sun. To these engineers, the challenge is keeping the two cubesats accurately aligned and pointed at their target.

Only dedicated Sun-observing space telescopes such as SDO, STEREO and SOHO are capable of blocking out the Sun, but their coronagraphs are limited. Separating the coronagraph farther from the optics markedly improves how closely one can look at the edge of a bright object. With the coronagraph mask closer to the optics, more bright light still reaches the optics and detectors and floods out what you really want to see. The technology Shah and his colleagues are developing could be a pathfinder for future space telescopes that will search for distant planets around other stars – also using a coronagraph to reveal the otherwise hidden planets.
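The geometry gives a feel for why the 20-meter spacing is attractive. The quick check below asks how large an occulting disk must be to just cover the Sun from that distance; the solar angular radius is a standard figure, but the result is only a sanity check, not a design value from the GSFC team.

```python
# Rough geometry of the two-cubesat coronagraph: how big must the occulting
# disk be to just cover the Sun from 20 m away? The numbers are a sanity
# check using the Sun's angular size, not design values from the GSFC team.
import math

separation_m = 20.0                      # planned spacing quoted above
sun_angular_radius_deg = 0.266           # ~16 arcmin, average solar radius on the sky

occulter_radius_m = separation_m * math.tan(math.radians(sun_angular_radius_deg))
print(f"Occulter radius to cover the Sun: ~{occulter_radius_m*100:.1f} cm")  # ~9.3 cm
# ...comfortably within the 10 cm face of a 1U-class cubesat.
```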

The engineers have received an $8.6-million investment from the Defense Advanced Research Projects Agency (DARPA) and are working in collaboration with the Maryland-based Emergent Space Technologies.

An example of a 3U cubesat – 3 1U cubes stacked. This cubesat size could function as the telescope of a two cubesat telescope system. It could be a simple 10 cm diameter optic system or use fancier folding optics to improve its resolving power. (Credit: LLNL)

The challenge for the GSFC engineers is giving two small cubesats guidance, navigation, and control (GN&C) as good as that of any standard spacecraft that has flown. They plan on using off-the-shelf technology, and there are many small and even large companies developing and selling cubesat parts.

This is a sorting-out period for the cubesat sector of the new space industry, if you will. Sorting through the off-the-shelf components, the GSFC engineers led by Shah will pick the best in class. The parts they need are things like tiny sun sensors and star sensors, laser beams and tiny detectors of those beams, accelerometers, tiny gyroscopes or momentum wheels, and small propulsion systems. The cubesat industry is pretty close to having all of these ready as standard issue. The question then is what you do with tiny satellites in low-Earth orbit (LEO). Telescopes for Earth observing are already making headway, and scopes for astronomy are next. There are also plans to venture out into interplanetary space with tiny and capable cubesat space probes.

Whether one can sustain a profit with a company built on cubesats remains a big question. Right now, those building cubesats to customer specs are making a profit, and those making the tiny picks and shovels for cubesats are making profits. The little industry may be overbuilt, which in economic parlance might be only natural, and many small startups will fail. However, for researchers at universities and research organizations like NASA, cubesats have staying power because they reduce cost through their low mass and size and the low cost of the components needed to make them function. The GSFC effort will help determine how quickly cubesats begin to do real work in the field of astronomy. Controlling attitude and adding propulsion are the next big things in cubesat development.

References:

NASA Press Release

Balloon launcher Zero2Infinity Sets Its Sights to the Stars

Zero2Infinity announced on October 15 its plans to begin micro-satellite launches to low-Earth orbit by 2017. (Credit: OIIOO)

Clearly, the sky is not the limit for balloon launcher Zero2Infinity. Based in Barcelona, Spain, the company announced this week its plans to launch payloads to orbit using a balloon-assisted launch system. The Rockoon is a portmanteau, as Lewis Carroll would have said: a blend of the words rocket and balloon.

The launch system announced by the company is called Bloostar. The Rockoon system begins with a balloon ascent to stratospheric altitudes, followed by the ignition of a three-stage rocket to achieve orbit. The Rockoon concept is not new: Dr. James Van Allen, with support from the US Navy, developed and launched the first Rockoons in 1949. Those were just sounding rockets; Bloostar will take payloads to low-Earth orbit and potentially beyond.

The Zero2Infinity Bloostar launch vehicle. Three stages will use a set of liquid fuel engines clustered as concentric toroids. (Photo Credit: 0II00)

The advantage of launching a rocket from a balloon is that it largely takes the Earth’s atmosphere out of the picture, both as a design constraint and as an impediment to reaching orbit. The first phase of the Bloostar system climbs above the densest part of the Earth’s atmosphere by reaching an altitude of over 20 km (>65,000 feet). Aerodynamics is no longer a driving factor, so the stages are built out rather than up: the stages of the Bloostar design are a set of concentric rings which are sequentially expended as it ascends to orbit.

Zero2Infinity is developing a liquid fuel engine that they emphasize is environmentally friendly. The first stage firing of Bloostar will last 160 seconds, reaching 250 km in altitude and an inertial speed of 3.7 km/s. This is about half the velocity necessary to reach a stable low-Earth orbit. The second stage will fire for 230 seconds and achieve an altitude of 530 km with a velocity of 5.4 km/s. The third and final stage motor will fire at least twice, with a coast period, to achieve the final orbit. Zero2Infinity states that their Bloostar system will be capable of placing a 75 kg (165 lb) payload into a 600 km (372 mi) sun-synchronous orbit. In contrast, the International Space Station orbits at an altitude of 420 km (260 mi).
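The “about half” claim checks out against the textbook circular-orbit speed at those altitudes, as the short sketch below shows (standard values for Earth’s gravitational parameter and radius are assumed).

```python
# Sanity check of the "about half the velocity needed for orbit" claim:
# compare the quoted stage velocities with circular orbital speed at the
# same altitudes. Standard gravitational parameter and Earth radius assumed.
import math

MU_EARTH = 398_600.0     # km^3/s^2
R_EARTH = 6_371.0        # km

def v_circular(alt_km):
    """Circular orbital speed (km/s) at a given altitude."""
    return math.sqrt(MU_EARTH / (R_EARTH + alt_km))

print(f"Circular speed at 250 km: {v_circular(250):.1f} km/s vs 3.7 km/s after stage 1")
print(f"Circular speed at 530 km: {v_circular(530):.1f} km/s vs 5.4 km/s after stage 2")
# ~7.8 km/s and ~7.6 km/s respectively -- so 3.7 km/s is indeed roughly half
# of what a stable low Earth orbit requires.
```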

The Bloostar launch phases. Zero2Infinity intends to de-orbit the final stage to minimize their contribution to the growing debris field in low-earth orbit. Their plans are to launch from a ship at sea. (Photo Credit: 0II00)

For the developing cubesat space industry, a 75 kg payload to orbit is huge. A single 10x10x10 cm cubesat (1U) will typically weigh about 1 kg, so Bloostar would be capable of launching an entire constellation of cubesats or, at the other extreme, a single micro-satellite with its own propulsion system to go beyond low-Earth orbit.

The Rockoon concept is not unlike what Scaled Composites undertakes with a carrier plane and rocket. Their WhiteKnight aircraft lift the SpaceShip vehicles to 50,000 feet for release, whereas the Zero2Infinity balloon will loft Bloostar to 65,000 feet or higher. The increased altitude of the balloon launch reduces the atmospheric density to about half of what it is at 50,000 feet, and to roughly 8% of the density at sea level.
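Those density ratios are consistent with a simple isothermal-atmosphere approximation, sketched below; the 8 km scale height is an assumed round figure, and real stratospheric profiles differ somewhat.

```python
# Rough check of the density claims with an isothermal-atmosphere
# approximation (density ~ exp(-h/H), scale height H ~ 8 km). Real
# stratospheric profiles differ somewhat, but the ratios come out close
# to the figures quoted above.
import math

H_KM = 8.0                      # assumed scale height
FT_TO_KM = 0.3048 / 1000

def rel_density(alt_ft):
    """Density relative to sea level for an isothermal atmosphere."""
    return math.exp(-alt_ft * FT_TO_KM / H_KM)

d50, d65 = rel_density(50_000), rel_density(65_000)
print(f"65,000 ft vs sea level: {d65:.0%}")        # ~8%
print(f"65,000 ft vs 50,000 ft: {d65 / d50:.0%}")  # ~56%, i.e. roughly half
```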

The act of building and launching a stratospheric balloon to 30 km (100,000 feet) altitude with instrument payloads of more than 100 kg is a considerable accomplishment. It is not just a matter of releasing a balloon: it involves plenty of logistics, telecommunications with the instrumentation, and the safe return of payloads to Earth. It is, in effect, half of what is necessary to reach orbit.

Bloostar is breaking new ground in Spain: the ground tests of its liquid fuel rocket engine are the first of their kind in the country. Zero2Infinity began launching balloons in 2009. The founder and CEO, Jose Mariano Lopez-Urdiales, is an aeronautical engineer educated in Spain with R&D experience involving ESA, MIT and Boeing. He has spearheaded organizations and activities in his native Spain, and in 2002 he presented the paper “The Role of Balloons in the Future Development of Space Tourism” to the World Space Congress in Houston.

References:

Zero2Infinity Press Release

Bloostar Launch Cycle