It’s funny to think that your smartphone might be faster than a new spaceship, but that’s what one report is saying about the Orion spacecraft. The computers are less-than-cutting-edge, the processors are 12 years old, and the speed at which it “thinks” is … slow, at least compared to a typical laptop today.
But according to NASA, there’s good reasoning behind using older equipment. In fact, it’s common for the agency to use this philosophy when designing missions — even one such as Orion, which saw the spacecraft soar 3,600 miles (roughly 5,800 kilometers) above Earth in an uncrewed test last week and make the speediest re-entry for a human spacecraft since the Apollo years.
The reason, according to a Computerworld report, is to design the spacecraft for reliability and ruggedness. Orion — which soared into the radiation-laden Van Allen belts above Earth — needs to withstand that environment and protect the humans on board. The computer is therefore based on a well-tested Honeywell system used in Boeing 787 jetliners. And Orion in fact carries three computers to provide redundancy if radiation causes a reset.
“The one thing we really like about this computer is that it doesn’t get destroyed by radiation,” said Matt Lemke, NASA’s deputy manager for Orion’s avionics, power and software team, in the report. “It can be upset, but it won’t fail. We’ve done a lot of testing on the different parts of the computer. When it sees radiation, it might have to reset, but it will come back up and work again.”
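The "three computers" approach NASA describes is the classic triple modular redundancy (TMR) pattern: each channel computes the same result, and a majority vote masks a single radiation-induced upset while the affected channel resets. Here's a minimal sketch of the voting logic — the names and numbers are illustrative, not Orion flight software:

```python
# Triple modular redundancy (TMR) voting sketch.
# A single bit flip in one channel is outvoted by the other two.
from collections import Counter

def majority_vote(outputs):
    """Return the value at least two of three channels agree on,
    or None if all three disagree (which would force a reset)."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count >= 2 else None

# Channel two suffers an upset; the vote still yields the right answer.
print(majority_vote([42, 43, 42]))  # -> 42
print(majority_vote([1, 2, 3]))     # -> None (no majority)
```

The key design point, echoed in the quote above, is that the system tolerates upsets rather than trying to prevent them entirely: a flipped channel is outvoted, reset, and rejoins the vote.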
A 2013 NASA presentation points out that the agency is a common user of commercial off-the-shelf (COTS) electronics. This usually happens for one of three reasons: officials can't find military or aerospace alternatives, unknown risks are part of the mission, or a mission has "a short lifetime or benign space environment exposure". NASA tests the electronics beyond design limits and often adds accommodations to make them even safer. Ideally, properly applied, proven hardware reduces a mission's overall risk and cost.
"The more understanding you have of a device's failure modes and causes, the higher the confidence level that it will perform under mission environments and lifetime," the presentation says. "Qualification processes are statistical beasts designed to understand/remove known reliability risks and uncover unknown risks inherent in a part."
In fact, the rocket that is eventually supposed to pair up with Orion will also use flight-tested systems for at least the first few flights. The Space Launch System, which NASA hopes will heft Orion on the next test flight in 2017 or 2018, will use solid rocket boosters based on those used with the shuttle. But NASA adds that upgrades are planned to the technology, which flew on shuttle missions in space starting in 1981.
“Although similar to the solid rocket boosters that helped power the space shuttle to orbit, the five-segment SLS boosters include several upgrades and improvements implemented by NASA and ATK engineers,” NASA wrote in a 2012 press release. “In addition, the SLS boosters will be built more affordably and efficiently than shuttle boosters, incorporating new and innovative processes and technologies.”
A handful of other prominent examples of hardware recycling in space exploration:
- RapidScat (a new Earth observation platform on the International Space Station that re-uses materials designed for QuikScat);
- Curiosity Mars rover’s MastCam (which is based on a successful camera design used on the Spirit and Opportunity rovers). The earlier camera design has been working on Opportunity since the rover landed on Mars in January 2004;
- Venus Express, a European Space Agency mission that uses designs and hardware from the Mars Express and Rosetta missions. It’s finishing its mission soon after eight years in orbit — four times the original plan.
It would take just one massive coronal mass ejection from the Sun hitting Earth — the way one must have during the “Carrington Event” of 1859, which brought down telegraph services all over — to render many, if not most, of our consumer-grade electronic devices into doorstops and paperweights. When I worked on my small part of the Mini-TES instrument destined for use on Spirit and Opportunity, I marveled at how simple it was compared to a smaller point-and-shoot digital camera I was using to record some of the testing steps we were doing on it. Mass-produced gadgets are marvelous, but surviving a trip to deep space often required extensive and intensive testing of some of the off-the-shelf components we used, even before we assembled them into the final flight article.
Unlike an EMP, a CME only causes damage to very long transmission lines and the equipment hooked up to them. They also move at a comparative snail’s pace. With the several days’ warning we’d have of a massive CME, we could figure out what part of the planet it was going to hit and decouple (physically sever) the long lines feeding expensive, difficult-to-replace hardware (transformers, power plants, etc.). Once the event passed, it would take mere days to fully repair those severed lines (and only hours to repair the most critical ones)… as opposed to the months (or even years) it would take to manufacture and replace the transformers if we allowed them to be blown out.
An exceptionally large CME would be a bit expensive, but with even the most modest preparation it wouldn’t cost a country any more than a large tornado-spawning thunderstorm. It’s only with complete inaction that a CME would be devastating.
I doubt that SpaceX uses twelve-year-old processors on their ships.
http://aviationweek.com/blog/dragons-radiation-tolerant-design
Dragon uses the same concept, and obviously so.
Great answers given on that site — basically the advantages of NOT going with very limiting rad-hardened equipment, and going for a rad-hardened DESIGN instead, which is able to handle and quickly repair any radiation effects on the equipment or programming. I especially appreciated the remarks about watching the hit count climb while the Shuttle was out there repairing the Hubble, and how the design kept repairing those hits, proving the effectiveness of this approach. I also notice that SpaceX, while on the cutting edge of this business, prefers the time-tested and most fully understood strengths and weaknesses of the older processors, rather than being rudely surprised by some problem in a cutting-edge processor that wasn’t understood at the time it was incorporated into their cutting-edge system. ThanX for pointing out this article.
Thanks, now I know, and it makes sense. I had hoped someone would know the answer. 😉
I always wonder why people think the well-known old chips aren’t good enough for spaceflight.
Is it beyond imagination that today’s PCs are hugely oversized for the task? You don’t need n-core hyper-duper stuff up there. There is no Flash and JavaScript crap stressing your CPU, there are no 3D games to play and no HD videos to edit in space.
And no one sane in the business would use bleeding-edge hardware that hasn’t been proven to function flawlessly here on Earth for five years in a row on a space mission that lasts years, where no service technician can swap out faulty parts…
On the LCROSS mission (and for its sister mission LRO) we used a processor from BAE known as a RAD750 — a 100 MHz PowerPC 750. For LCROSS we only used 6% of the CPU time. Note that this did not include any image/science processing; that was handled by another processing system. So it doesn’t take much to actually run the spacecraft — all the horsepower is needed for science data processing.
— Emory Stagmer, Lead flight software engineer for NASA’s LCROSS mission (@VAXHeadroom)
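Taking the figures in that comment at face value (a 100 MHz clock and roughly 6% utilization for spacecraft control — assumed numbers, not official specs), the headroom works out like this:

```python
# Back-of-the-envelope CPU budget using the figures quoted above.
# Illustrative arithmetic only, not a flight-software analysis.
clock_hz = 100e6      # RAD750 clock rate
utilization = 0.06    # fraction of CPU time used to fly the craft

used_hz = clock_hz * utilization
headroom_hz = clock_hz - used_hz

print(f"Cycles/sec used to run the spacecraft: {used_hz:,.0f}")      # 6,000,000
print(f"Cycles/sec left in reserve:            {headroom_hz:,.0f}")  # 94,000,000
```

Even on a modest 100 MHz part, that leaves a 16x margin over what the core flight tasks consume — which is exactly why raw clock speed matters so much less than reliability.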
Thanks for this comment. It was enlightening. :)