NASA has just powered down its last mainframe computer. Umm, everyone remembers what a mainframe computer is, right? Well, you certainly must recall working with punched cards, paper tape, and/or magnetic tape, correct? That does sound a little archaic. “But all things must change,” wrote Linda Cureton on the NASA CIO blog. “Today, they are the size of a refrigerator but in the old days, they were the size of Cape Cod.”
The last mainframe in service at NASA, an IBM Z9, ran at the Marshall Space Flight Center. Cureton described the mainframe as a “big computer that is known for being reliable, highly available, secure, and powerful. They are best suited for applications that are more transaction oriented and require a lot of input/output – that is, writing or reading from data storage devices.”
In the 1960s, users gained access to the huge mainframe computers through specialized terminals and punched cards. By the 1980s, many mainframes supported graphical terminals where people could work, though not graphical user interfaces. This style of end-user computing became obsolete in the 1990s when personal computers came to the forefront.
Most modern mainframes are not quite so huge, and excel at redundancy and reliability. These machines can run for long periods of time without interruption. Cureton says that even though NASA has shut down its last one, there is still a requirement for mainframe capability in many other organizations. “The end-user interfaces are clunky and somewhat inflexible, but the need remains for extremely reliable, secure transaction oriented business applications,” she said.
But today, all you need to say is, “there’s an app for it!” Cureton said.
Mainframes as we knew them are dead, but the idea behind them is not. The monolithic approach to big computing has been replaced with a cheaper, more flexible parallel architectural model (aka “distributed computing”).
Agreed. As far as I’m concerned, when virtualization extensions hit the x86 platform, allowing practically native-speed operation of virtual machines, that was the death sentence for mainframes. Now you can run a critical application on a standard PC platform and be entirely tolerant of hardware failures. With the right options configured in your hypervisor, your application will run identically on two different computers at the same time, so if one goes down for whatever reason the other is ready to take over instantly.
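For anyone who hasn’t seen it in action, here’s a minimal Python sketch of the idea (not any real hypervisor’s API; the Replica class and event stream are invented for illustration): two replicas apply the same input stream, so the standby can answer the instant the active one dies.

```python
# Toy illustration of hypervisor-style fault tolerance: two replicas apply
# the same input stream, so the survivor can take over instantly.
# This is a made-up sketch, not any real hypervisor API.

class Replica:
    def __init__(self, name):
        self.name = name
        self.state = 0       # both replicas hold identical state
        self.alive = True

    def apply(self, event):
        self.state += event  # apply the same input on both machines


def serve(events, primary, standby):
    active = primary
    for event in events:
        if not active.alive:
            active = standby  # instant failover, no state is lost
            print(f"failover -> {active.name}")
        for replica in (primary, standby):
            if replica.alive:
                replica.apply(event)
        print(f"{active.name} answers with state {active.state}")


if __name__ == "__main__":
    a, b = Replica("host-a"), Replica("host-b")
    serve([1, 2], a, b)      # both hosts healthy
    a.alive = False          # simulate host-a's hardware failing
    serve([3, 4, 5], a, b)   # host-b carries on with identical state
```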
It’s functionality that requires expensive licensing from the perspective of a PC builder, but in comparison to mainframe system costs it’s a drop in the bucket.
Some telco hardware takes a similar approach: twin processors receive identical input and run the same programme, but only one’s output is used. A hardware comparator spots any difference between the outputs, makes a 50/50 guess about which side is bad, and swaps processors; at least the other one is up with the play. (Unfortunately you can occasionally end up with a hung “I don’t know, do you know? No, I don’t know, do you know?” scenario. Then it’s reload, or change cards.) Telcos require pretty high reliability, but if I were on the way to Mars I’d want more reliability than a level accepted just to reduce the risk of losing money.
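For what it’s worth, here’s a rough Python sketch of that comparator behaviour (purely illustrative; the program and fault model are invented, not real telco logic): both sides run the same function on the same input, and on a mismatch the comparator makes its 50/50 guess and switches the active output.

```python
# Rough model of a lockstep pair: two processors run the same program on the
# same input, a comparator checks the outputs, and on a mismatch it guesses
# which side is healthy and swaps. Purely illustrative, not real telco logic.

import random


def program(x, faulty=False):
    result = x * 2               # the identical "programme" both sides run
    return result + 1 if faulty else result


def comparator(inputs, b_fails_after=3):
    active = "A"
    for i, x in enumerate(inputs):
        out_a = program(x)
        out_b = program(x, faulty=(i >= b_fails_after))
        if out_a != out_b:
            # The comparator can't tell which side is wrong, so it makes a
            # 50/50 guess and switches to whichever processor it picks.
            active = random.choice(["A", "B"])
            print(f"mismatch at input {x}, now trusting {active}")
        yield out_a if active == "A" else out_b


if __name__ == "__main__":
    print(list(comparator(range(6))))
```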
It’s a shame how often the words NASA and shutdown seem to go together lately.
Is this for good, for bad, or just a regular upgrade?
It is like a credit rating: if you have never borrowed, you have no credit rating and can’t borrow. If you have no Netpresence, you have no credibility and cannot post.
It is the death of individualism, and an ominous trend.
Ha! You sound a bit like Yossarian there, but he was trying to get OUT of something.
What a topic for a history book, though: computers at NASA.
I can remember this like it was yesterday. Wow, how time flies…