This Envisat image was acquired over northern Chile’s Atacama Desert, the driest place on Earth outside of the Antarctic dry valleys.
Bounded on the west by the Pacific and on the east by the Andes, the Atacama Desert sees rainfall only two to four times a century. The first sight of green in this Medium Resolution Imaging Spectrometer (MERIS) image occurs some 200 kilometres east of the coast, at the foothills of the Western Cordillera, where wispy white clouds start to make an appearance.
There are some parts of the desert where rainfall has never been recorded. The only moisture available comes from a dense fog known as camanchaca, formed when cold air associated with ocean currents originating in the Antarctic hits warmer air. This fog is literally harvested by plants and animals alike, including Atacama’s human inhabitants who use ‘fog nets’ to capture it for drinking water.
The landscape of the Atacama Desert is no less stark than its meteorology: a plateau covered with lava flows and salt basins. The conspicuous white area below the image centre is the Atacama Salt Flat, just to the south of the small village San Pedro de Atacama, regarded as the centre of the desert.
The Atacama is rich in copper and nitrates – it has been the subject of border disputes between Chile and Bolivia for this reason – and so is strewn with abandoned mines. Today the European Southern Observatory (ESO) operates from high zones of the Atacama, its astronomers treasuring the region’s remoteness and dry air. The Pan-American Highway runs north-south through the desert.
Along the Pacific coast, the characteristic tuft shape of the Mejillones Peninsula is visible; the town of Antofagasta lies just south of Moreno Bay, on the southern side of the formation.
This MERIS full resolution image was acquired on 10 January 2003 and has a spatial resolution of 200 metres.
Tiny Mimas is dwarfed by a huge white storm and dark waves on the edge of a cloud band in Saturn’s atmosphere.
Although the east-west winds on Saturn are stronger than those on Earth or even Jupiter, the contrast in the appearance of its cloud bands is more muted, and the departures of the wind speeds from the mean east-west flow are smaller.
The image was taken with the Cassini spacecraft narrow angle camera on Sept. 25, 2004, at a distance of 7.8 million kilometers (4.8 million miles) from Saturn through a filter sensitive to wavelengths of infrared light centered at 727 nanometers. The image scale is 46 kilometers (29 miles) per pixel.
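As a rough cross-check, the quoted scale is just the spacecraft’s range multiplied by the camera’s angular pixel size (about 6 microradians per pixel for the narrow-angle camera; that figure is an outside assumption, not from this caption):

\[
\text{scale} \;\approx\; d\,\theta_{\mathrm{pixel}} \;\approx\; \left(7.8\times10^{6}\ \mathrm{km}\right)\left(6\times10^{-6}\ \mathrm{rad}\right) \;\approx\; 47\ \mathrm{km\ per\ pixel},
\]

which agrees with the 46 kilometers per pixel quoted.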
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini-Huygens mission for NASA’s Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging team is based at the Space Science Institute, Boulder, Colo.
The Subaru telescope has witnessed a large galaxy in the act of devouring a small companion galaxy in a new image obtained by Yoshiaki Taniguchi (Tohoku University), Shunji Sasaki (Tohoku University), Nicolas Scoville (California Institute of Technology) and colleagues. The evidence is a wispy band of stars extending over 500 thousand light years, the faintest and longest known example of its kind.
Current theories of galaxy formation suggest that large galaxies like the Milky Way grow by consuming smaller dwarf galaxies. Evidence of this process can be found in our own galactic neighborhood. Some stars in the Milky Way appear to have once belonged to a small nearby galaxy called the Sagittarius Dwarf. Our closest large neighbor galaxy, Andromeda, also shows evidence of past galactic cannibalism. However, in both cases these conclusions are inferred from “post-digestive” observations.
The destruction of dwarf galaxies is difficult to observe because dwarf galaxies are inherently faint and their light becomes increasingly diffuse as stars get pulled away by a larger galaxy. The only previously known observation of the destruction of a dwarf galaxy in progress is from the Advanced Camera for Surveys on the Hubble Space Telescope.
Taniguchi, Sasaki, Scoville and colleagues serendipitously discovered the large elliptical galaxy (COSMOS J100003+020146) pulling apart the dwarf galaxy (COSMOS J095959+020206) while observing an area of sky in the constellation Sextans to study the properties of galaxies over large scales in space and time. The pair of galaxies is about one billion light years away and the distance between the two galaxies is about 330 thousand light years.
The thin band of stars extending from the dwarf galaxy both toward and away from the large elliptical galaxy reveals that the gravity of the elliptical is tidally tearing the dwarf apart. Stars that are closest to the elliptical galaxy experience a stronger pull than stars in the center of the dwarf galaxy, and stars on the opposite side experience a weaker pull. As a result, the dwarf galaxy becomes stretched and looks as if it’s being pulled from two opposite directions even though there is only one galaxy doing the pulling. This effect is comparable to how two areas on the opposite sides of Earth experience high tide at the same time even though there is only one Moon tugging on Earth’s oceans.
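In back-of-the-envelope terms (a standard tidal approximation, not a figure from these observations), the stretching across a satellite of radius r at distance d from a mass M is the difference in pull between its near side and its centre:

\[
\Delta a \;\approx\; \frac{GM}{(d-r)^{2}} - \frac{GM}{d^{2}} \;\approx\; \frac{2GMr}{d^{3}} \qquad (r \ll d).
\]

The steep 1/d^3 falloff is why the dwarf is shredded only once it strays close to the elliptical, and why the Moon raises tides on both the near and far sides of the Earth at once.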
The tidally torn strip of stars in the newly observed pair of galaxies is five times more extended and three times fainter in surface brightness than the one observed with the Hubble Space Telescope. The Subaru telescope’s ability to gather large amounts of light and focus it into a superbly sharp image was essential for this new discovery.
As astronomers find more examples of galactic cannibalism in action, our knowledge of the history of galaxies should become increasingly vivid. Although no human alive today will be able to witness the ultimate fate of the newly discovered pair, chances are the elliptical galaxy will be able to complete the meal it’s begun and fully consume its neighbor.
A place so barren that NASA uses it as a model for the Martian environment, Chile’s Atacama desert gets rain maybe once a decade. In 2003, scientists reported that the driest Atacama soils were sterile.
Not so, reports a team of Arizona scientists. Bleak though it may be, microbial life lurks beneath the arid surface of the Atacama’s absolute desert.
“We found life, we can culture it, and we can extract and look at its DNA,” said Raina Maier, a professor of soil, water and environmental science at the University of Arizona in Tucson.
The work from her team contradicts last year’s widely reported study that asserted the “Mars-like soils” of the Atacama’s core were the equivalent of the “dry limit of microbial life.”
Maier said, “We are saying, ‘What is the dry limit of life?’ We haven’t reached it yet.”
The Arizona researchers will publish their findings as a letter in the Nov. 19 issue of the journal Science. Maier’s co-authors include UA researchers Kevin Drees, Julie Neilson, David Henderson and Jay Quade and U.S. Geological Survey paleoecologist Julio Betancourt. The project was funded by the National Science Foundation and the National Institute of Environmental Health Sciences, part of the National Institutes of Health.
The project began not as a search for current life but rather as an attempt to peer into the past and reconstruct the history of the region’s plant communities. Betancourt and Quade, a UA professor of geosciences, have been conducting research in the Atacama for the past seven years.
Some parts of the Atacama have vegetation, but the absolute desert of the Atacama’s core — an area Betancourt describes as “just dirt and rocks” — has none.
Nor does the area have cliffs which harbor ancient piles of vegetation, known as middens, collected and stored by long-gone rodents. Researchers use such fossil plant remains to tell what grew in a place long ago.
So to figure out whether the area had ever been vegetated, Quade and Betancourt had to search the soil for biologically produced minerals such as carbonates. To rule out the possibility that such soil minerals were being produced by present-day microorganisms, the two geoscientists teamed up with UA environmental microbiologist Maier.
In October 2002, the researchers collected soil samples under sterile conditions along a 200-kilometer (120-mile) transect that ran from an elevation of 4,500 meters (almost 15,000 feet) down to sea level.
Every 300 meters (about 1,000 feet) of elevation along the transect, the team dug a pit and took two soil samples from a depth of 20 to 30 centimeters (8 to 12 inches). To keep the samples sterile, Betancourt cleaned his hand trowel with Lysol before taking each one.
“When it’s still, it’s not a problem,” he said. “But when the wind’s blowing at 40 miles per hour, it’s a little more complicated.”
The geoscientists brought their test tubes full of desert soil back to Maier’s lab, where her team wetted the soil samples with sterile water, let them sit for 10 days, and then grew bacteria from them.
“We brought ’em back alive, it turns out,” Betancourt said.
Maier and her team have not yet identified the bacteria that come from the extremely arid environment of the Atacama’s core, but she can already say they are unusual.
She said, “As a microbiologist, I am interested in how these microbial communities evolve and respond. Can we discover new microbial activities in such extreme environments? Are those activities something we can exploit?”
The team’s findings suggest that how researchers look for life on Mars may affect whether life is found on the Red Planet.
The other researchers who had tested soil from the Atacama had looked for life only down to a depth of four inches. So one rule, Quade quipped, is, “Don’t just scratch the surface.”
Saying that Mars researchers are most likely looking for a needle in a very large haystack, Maier said, “If you aren’t very careful about your Mars protocol, you could miss life that’s there.”
Peter H. Smith, the UA planetary scientist who is the principal investigator for the upcoming Phoenix mission to Mars, said, “Scientists on the Phoenix Mission suspect that there are regions on Mars, arid like the Atacama Desert in Chile, that are conducive to microbial life.” He added, “We will attempt an experiment similar to Maier’s group on Mars during the summer of 2008.”
As for Maier and her colleagues, Betancourt said, “We’re very, very interested in life on Earth and how it functions.”
Maier suspects the microbes may persist in a state of suspended animation during the Atacama Desert’s multi-decadal dry spells.
So the team’s next step is to return to Chile and do experiments on-site. One option is what Maier calls “making our own rainfall event” — adding water to the Atacama’s soils — and seeing whether the team could then detect microbial activity.
Star formation is one of the most basic phenomena in the Universe. Inside stars, primordial material from the Big Bang is processed into heavier elements that we observe today. In the extended atmospheres of certain types of stars, these elements combine into more complex systems like molecules and dust grains, the building blocks for new planets, stars and galaxies and, ultimately, for life. Violent star-forming processes let otherwise dull galaxies shine in the darkness of deep space and make them visible to us over large distances.
Star formation begins with the collapse of the densest parts of interstellar clouds, regions that are characterized by comparatively high concentration of molecular gas and dust like the Orion complex (ESO PR Photo 20/04) and the Galactic Centre region (ESO Press Release 26/03). Since this gas and dust are products of earlier star formation, there must have been an early epoch when they did not yet exist.
But how, then, did the first stars form? Indeed, describing and explaining “primordial star formation” – without molecular gas and dust – is a major challenge in modern astrophysics.
A particular class of relatively small galaxies, known as “Blue Dwarf Galaxies”, possibly provide nearby and contemporary examples of what may have occurred in the early Universe during the formation of the first stars. These galaxies are poor in dust and heavier elements. They contain interstellar clouds which, in some cases, appear to be quite similar to those primordial clouds from which the first stars were formed. And yet, despite the relative lack of the dust and molecular gas that form the basic ingredients for star formation as we know it from the Milky Way, those Blue Dwarf Galaxies sometimes harbour very active star-forming regions. Thus, by studying those areas, we may hope to better understand the star-forming processes in the early Universe.
Very active star formation in NGC 5253
NGC 5253 is one of the nearest of the known Blue Dwarf Galaxies; it is located at a distance of about 11 million light-years in the direction of the southern constellation Centaurus. Some time ago a group of European astronomers [1] decided to take a closer look at this object and to study star-forming processes in the primordial-like environment of this galaxy.
True, NGC 5253 does contain some dust and heavier elements, but significantly less than our own Milky Way galaxy. However, it is quite extreme as a site of intense star formation, a profuse “starburst galaxy” in astronomical terminology, and a prime object for detailed studies of large-scale star formation.
ESO PR Photo 31a/04 provides an impressive view of NGC 5253. This composite image is based on a near-infrared exposure obtained with the multi-mode ISAAC instrument mounted on the 8.2-m VLT Antu telescope at the ESO Paranal Observatory (Chile), as well as two images in the optical waveband obtained from the Hubble Space Telescope data archive (located at ESO Garching). The VLT image (in the K-band at wavelength 2.16 µm) is coded red, the HST images are blue (V-band at 0.55 µm) and green (I-band at 0.79 µm), respectively.
The enormous light-gathering capability and the fine optical quality of the VLT made it possible to obtain the very detailed near-infrared image (cf. PR Photo 31b/04) during an exposure lasting only 5 min. The excellent atmospheric conditions of Paranal at the time of the observation (seeing 0.4 arcsec) allow the combination of space- and ground-based data into a colour photo of this interesting object.
A major dust lane is visible at the western (right) side of the galaxy, but patches of dust are visible all over, together with a large number of colourful stars and stellar clusters. The different colour shades are indicative of the ages of the objects and the degree of obscuration by interstellar dust. The near-infrared VLT image penetrates the dust clouds much better than the optical HST images, and some deeply embedded objects that are not detected in the optical therefore appear as red in the combined image.
Measuring the size and infrared brightness of each of these “hidden” objects, the astronomers were able to distinguish stars from stellar clusters; they count no fewer than 115 clusters. It was also possible to derive their ages – about 50 of them are very young in astronomical terms, less than 20 million years. The distribution of the masses of the cluster stars resembles that observed in clusters in other starburst galaxies, but the large number of young clusters and stars is extraordinary in a galaxy as small as NGC 5253.
When images are obtained of NGC 5253 at progressively longer wavelengths, cf. ESO PR Photo 31c/04 which was taken with the VLT in the L-band (wavelength 3.7 µm), the galaxy looks quite different. It no longer displays the richness of sources seen in the K-band image and is now dominated by a single bright object. By means of a large number of observations in different wavelength regions, from the optical to the radio, the astronomers find that this single object emits as much energy in the infrared part of the spectrum as does the entire galaxy in the optical region. The amount of energy radiated at different wavelengths shows that it is a young (a few million years), very massive (more than one million solar masses) stellar cluster, embedded in a dense and heavy dust cloud (more than 100,000 solar masses of dust; the emission seen in PR Photo 31c/04 comes from this dust).
A view towards the beginnings
These results show that a galaxy as tiny as NGC 5253, almost 100 times smaller than our own Milky Way galaxy, can produce hundreds of compact stellar clusters. The youngest of these clusters are still deeply embedded in their natal clouds, but when observed with infrared-sensitive instruments like ISAAC at the VLT, they stand out as very bright objects indeed.
The most massive of these clusters holds about one million solar masses and shines as brightly as 5,000 very bright massive stars. It may well be very similar to the progenitors in the early Universe of the old globular clusters we now observe in large galaxies like the Milky Way. In this sense, NGC 5253 provides us with a direct view towards our own beginnings.
Note
[1] The group consists of Giovanni Cresci (University of Florence, Italy), Leonardo Vanzi (ESO-Chile) and Marc Sauvage (CEA/DSN/DAPNIA, Saclay, France). More details about the present investigation are available in a research paper (“The Star Cluster population of NGC 5253” by G. Cresci et al.) to appear soon in the leading research journal Astronomy & Astrophysics (a preprint is available as astro-ph/0411486).
A speech by Arthur C. Clarke in the 1960s explaining geostationary satellites gave Pearson the inspiration for the whole concept of space elevators while he was working at the NASA Ames Research Center in California during the days of the Apollo Moon landings.
“Clarke said that a good way to understand communications satellites in geostationary orbit was to imagine them at the top of a tall tower, perched 35,786 km (22,236 miles) above the Earth,” Pearson recalls. “I figured, why not build an actual tower?”
He realized that it was theoretically possible to park a counterweight, like a small asteroid, in geostationary orbit and then extend a cable down and affix it at the Earth’s equator. In theory, elevator cars could travel up the long cable, and transfer cargo out of the Earth’s gravity well and into space at a fraction of the price delivered by chemical rockets.
… in theory. The problem then, and now, is that no known material can support even just the weight of the cable in the Earth’s gravity. Only in the last few years, with the advent of carbon nanotubes – with a tensile strength in the ballpark – have people finally moved past the laughing stage and begun investigating the idea seriously. And while carbon nanotubes have been manufactured in small quantities in the lab, engineers are still years away from weaving them together into a long cable that could provide the necessary strength.
Pearson knew the technical challenges were formidable, so he wondered, “Why not build an elevator on the Moon?”
On the Moon, the force of gravity is one sixth of what we feel here on Earth, and a space elevator cable is well within our current manufacturing technology. Stretch a cable up from the surface of the Moon, and you’d have an inexpensive method of delivering minerals and supplies into Earth orbit.
A lunar space elevator would work differently from one based on Earth. Unlike our own planet, which rotates every 24 hours, the Moon turns on its axis only once every 29 days, the same amount of time it takes to complete one orbit around the Earth. This is why we can only ever see one side of the Moon. The concept of geostationary orbit doesn’t really make sense around the Moon.
There are, however, five places in the Earth-Moon system where you could put an object of low mass – like a satellite… or a space elevator counterweight – and have it hold its position with very little energy: the Earth-Moon Lagrange points. The L1 point, a spot approximately 58,000 km above the surface of the Moon, would work perfectly.
Imagine that you’re floating in space at a point between the Earth and the Moon where the force of gravity from both is perfectly balanced. Look to your left, and the Moon is approximately 58,000 km (37,000 miles) away; look to your right and the Earth is more than 5 times that distance. Without any kind of thrusters, you’ll eventually drift out of this perfect balancing point, and then start accelerating towards either the Earth or the Moon. L1 is balanced, but unstable.
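For readers who want to see where the 58,000 km figure comes from, the balance point can be found numerically by setting the Earth’s pull, the Moon’s pull and the centrifugal term of the co-rotating frame against one another. A minimal sketch with round textbook values (not mission numbers):

```python
# Locate the Earth-Moon L1 point by bisection. All figures are round
# textbook values, not mission numbers.
G   = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_E = 5.972e24         # Earth mass, kg
M_M = 7.342e22         # Moon mass, kg
D   = 3.844e8          # mean Earth-Moon distance, m

mu     = M_M / (M_E + M_M)          # Moon's mass fraction (sets the barycentre)
omega2 = G * (M_E + M_M) / D**3     # square of the orbital angular rate

def net_accel(x):
    """Net acceleration (Earth-ward positive) on a co-rotating test mass
    at distance x from Earth's centre, along the Earth-Moon line."""
    return G * M_E / x**2 - G * M_M / (D - x)**2 - omega2 * (x - mu * D)

# Bisect between Earth and Moon for the point where the terms balance.
lo, hi = 0.5 * D, 0.99 * D
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if net_accel(mid) > 0:   # still pulled Earth-ward: L1 lies farther out
        lo = mid
    else:
        hi = mid

print(f"L1 is about {(D - mid) / 1e3:,.0f} km from the Moon's centre")
# -> roughly 58,000 km, matching the figure quoted above
```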
Pearson is proposing that NASA launch a spacecraft carrying a huge spool of cable to the L1 point. It would slowly back away from the L1 point as it unspooled its cable down to the surface of the Moon. Once the cable was anchored to the lunar surface, it would provide tension, and the entire cable would hang in perfect balance, like a pendulum pointed towards the ground. And like a pendulum, the elevator would always keep itself aligned perfectly towards the L1 point, as the Earth’s gravity tugged away at it. The mission could even include a small solar powered climber which could climb up from the lunar surface to the top of the cable, and deliver samples of moon rocks into a high Earth orbit. Further missions could deliver whole teams of climbers, and turn the concept into a mass production operation.
The advantage of connecting an elevator to the Moon instead of the Earth is the simple fact that the forces involved are much smaller – the Moon’s gravity is 1/6th that of Earth’s. Instead of exotic nanotubes with extreme tensile strengths, the cable could be built using high-strength commercially available materials, like Kevlar or Spectra. In fact, Pearson has zeroed in on a commercial fibre called M5, which he calculates would only weigh 6,800 kg for a full cable that would support a lifting capacity of 200 kg at the base. This is well within the capabilities of the most powerful rockets supplied by Boeing, Lockheed Martin and Arianespace. One launch is all it takes to put an elevator on the Moon. And once the elevator was installed, you could start reinforcing it with additional materials, like glass and boron, which could be manufactured on the Moon.
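A figure of that order can be roughly reproduced with the classic constant-stress taper calculation: size the cable’s cross-section at the anchor to carry the payload, then widen it element by element so each piece of cable can carry the weight hanging below it. The sketch below is only an illustration under assumed M5 properties (about 5.7 GPa working stress and 1,700 kg/m³ density, rough literature values), not Pearson’s actual design calculation:

```python
# Hypothetical sanity check: mass of a constant-stress tapered cable running
# from the lunar surface up to L1, carrying 200 kg at the base.
G, M_E, M_M = 6.674e-11, 5.972e24, 7.342e22   # SI units
D       = 3.844e8          # Earth-Moon distance, m
R_MOON  = 1.737e6          # lunar radius, m
X_L1    = D - 5.8e7        # L1, ~58,000 km from the Moon (see previous sketch)
SIGMA   = 5.7e9            # assumed M5 working stress, Pa
RHO     = 1.7e3            # assumed M5 density, kg/m^3
PAYLOAD = 200.0            # lifting capacity at the base, kg

mu     = M_M / (M_E + M_M)
omega2 = G * (M_E + M_M) / D**3

def g_moon(x):
    """Net Moon-ward acceleration (rotating frame) at distance x from Earth."""
    return G * M_M / (D - x)**2 - G * M_E / x**2 + omega2 * (x - mu * D)

n    = 200_000
step = (D - R_MOON - X_L1) / n            # march from the anchor up to L1
x    = D - R_MOON                          # start at the lunar surface
area = PAYLOAD * g_moon(x) / SIGMA         # base cross-section sized to payload
mass = 0.0
for _ in range(n):
    dm = RHO * area * step                 # mass of this cable element
    mass += dm
    area += dm * g_moon(x) / SIGMA         # widen to carry the added weight
    x -= step                              # next element, farther from the Moon

print(f"cable mass ≈ {mass:,.0f} kg")      # same ballpark as the quoted 6,800 kg
```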
So, what would you do with a space elevator connected to the Moon? “Plenty,” says Pearson, “there are all kinds of resources on the Moon which would be much easier to gather there and bring into orbit rather than launching them from the Earth. Lunar regolith (moon dirt) could be used as shielding for space stations; metals and other minerals could be mined from the surface and used for construction in space; and if ice is discovered at the Moon’s south pole, you could supply water, oxygen and even fuel to spacecraft.”
If water ice does turn up at the Moon’s south pole, you could run a second cable there, and then connect it at the end to the first cable. This would allow a southern Moon base to deliver material into high-Earth orbit without having to travel along the ground to the base of the first elevator.
It’d be great for rocks, but not for people. Even if a climber moved up the cable at hundreds of kilometres an hour, astronauts would be traveling for weeks, and be exposed to the radiation of deep space. But when you’re talking about cargo, slow and steady wins the race.
Pearson first published his idea of a lunar elevator back in 1979 and he’s been pitching it ever since. This year, though, NASA’s not laughing, they’re listening. Pearson’s company, Star Technology and Research, was recently awarded a $75,000 grant from NASA’s Institute for Advanced Concepts (NIAC) for a six-month study to investigate the idea further. If the idea proves to be promising, Pearson could receive a larger grant to begin overcoming some of the engineering challenges, and look for partners inside NASA and out to help in its development.
NIAC looks for ideas which are way outside NASA’s normal comfort zone of technologies – for example… an elevator on the Moon – and helps develop them to the point that many of the risks and unknowns have been ironed out.
Pearson hopes this grant will help him make the case to NASA that a lunar elevator would be an invaluable contribution to the new Moon-Mars space exploration vision, supporting future lunar bases and industries in space. And it would give engineers a way to understand the difficulties of building elevators into space without taking on the immense challenge of building one on Earth first.
It’s the year 2027 and NASA’s Vision for Space Exploration is progressing right on schedule. The first interplanetary spacecraft with humans aboard is on course for Mars. However, halfway into the trip, a gigantic solar flare erupts, spewing lethal radiation directly at the spacecraft. But, not to worry. Because of research done by former astronaut Jeffrey Hoffman and a group of MIT colleagues back in the year 2004, this vehicle has a state-of-the-art superconducting magnetic shielding system that protects the human occupants from any deadly solar emissions.
New research has recently begun to examine the use of superconducting magnet technology to protect astronauts from radiation during long-duration spaceflights, such as the interplanetary flights to Mars that are proposed in NASA’s current Vision for Space Exploration.
The principal investigator for this concept is former astronaut Dr. Jeffrey Hoffman, who is now a professor at the Massachusetts Institute of Technology (MIT).
Hoffman’s concept is one of 12 proposals that began receiving funding last month from the NASA Institute for Advanced Concepts (NIAC). Each gets $75,000 for six months of research to make initial studies and identify challenges in developing it. Projects that make it through that phase are eligible for as much as $400,000 more over two years.
The concept of magnetic shielding is not new. As Hoffman says, “the Earth has been doing it for billions of years!”
Earth’s magnetic field deflects cosmic rays, and an added measure of protection comes from our atmosphere, which absorbs any cosmic radiation that makes its way through the magnetic field. Using magnetic shielding for spacecraft was first proposed in the late 1960s and early ’70s, but was not actively pursued when plans for long-duration spaceflight fell by the wayside.
However, the technology for creating superconducting magnets that can generate strong fields to shield spacecraft from cosmic radiation has only recently been developed. Superconducting magnet systems are desirable because they can create intense magnetic fields with little or no electrical power input, and with proper temperatures they can maintain a stable magnetic field for long periods of time.
One challenge, however, is developing a system that can create a magnetic field large enough to protect a bus-sized, habitable spacecraft. Another is keeping the system at temperatures near absolute zero (0 Kelvin, -273 C, -460 F), which gives the materials their superconductive properties. Recent advances in superconducting technology and materials have achieved superconductivity at temperatures above 120 K (-153 C, -243 F).
There are two types of radiation that need to be addressed for long-duration human spaceflight, says William S. Higgins, an engineering physicist who works on radiation safety at Fermilab, the particle accelerator near Chicago, IL. The first is solar flare protons, which would arrive in bursts following a solar flare event. The second is galactic cosmic rays, which, although not as lethal as solar flare protons, would expose the crew to a continuous background radiation. In an unshielded spacecraft, both types of radiation would result in significant health problems, or death, for the crew.
The easiest way to avoid radiation is to absorb it, like wearing a lead apron when you get an X-ray at the dentist. The problem is that this type of shielding can often be very heavy, and mass is at a premium with our current space vehicles since they need to be launched from the Earth’s surface. Also, according to Hoffman, if you use just a little bit of shielding, you can actually make it worse, because the cosmic rays interact with the shielding and can create secondary charged particles, increasing the overall radiation dose.
Hoffman foresees using a hybrid system that employs both a magnetic field and passive absorption. “That’s the way the Earth does it,” Hoffman explained, “and there’s no reason we shouldn’t be able to do that in space.”
One of the most important conclusions of the second phase of this research will be whether using superconducting magnet technology is mass effective.
“I have no doubt that if we build it big enough and strong enough, it will provide protection,” Hoffman said. “But if the mass of this conducting magnet system is greater than the mass just to use passive (absorbing) shielding, then why go to all that trouble?”
But that’s the challenge, and the reason for this study. “This is research,” Hoffman said. “I’m not partisan one way or the other; I just want to find out what’s the best way.”
Assuming Hoffman and his team can demonstrate that superconducting magnetic shielding is mass effective, the next step would be doing the actual engineering of creating a large enough (albeit lightweight) system, in addition to the fine-tuning of maintaining magnets at ultra-cold superconducting temperatures in space. The final step would be to integrate such a system into a Mars-bound spacecraft. None of these tasks are trivial.
Whether the magnetic field strength and near-absolute-zero temperatures of such a system can be maintained in space is already being examined in an experiment scheduled to be launched to the International Space Station for a three-year stay. The Alpha Magnetic Spectrometer (AMS) will be attached to the outside of the station and search for different types of cosmic rays. It will employ a superconducting magnet to measure each particle’s momentum and the sign of its charge. Peter Fisher, a physics professor also from MIT, works on the AMS experiment and is cooperating with Hoffman on his research into superconducting magnets. A graduate student and a research scientist are also working with Hoffman.
NIAC was created in 1998 to solicit revolutionary concepts from people and organizations outside the space agency that could advance NASA’s missions. The winning concepts are chosen because they “push the limits of known science and technology,” and “show relevance to the NASA mission,” according to NASA. These concepts are expected to take at least a decade to develop.
Hoffman flew in space five times and became the first astronaut to log more than 1,000 hours on the space shuttle. On his fourth space flight, in 1993, Hoffman participated in the first Hubble Space Telescope servicing mission, an ambitious and historic mission that corrected the spherical aberration problem in the telescope’s primary mirror. Hoffman left the astronaut program in 1997 to become NASA’s European Representative at the US Embassy in Paris, and then joined MIT in 2001.
Hoffman knows that to make a space mission possible, there’s a lot of idea development and hard engineering which precedes it.
“When it comes to doing things in space, if you’re an astronaut, you go and do it with your own hands,” Hoffman said. “But you don’t fly in space forever, and I still would like to make a contribution.”
Does he see his current research as important as fixing the Hubble Space Telescope?
“Well, not in the immediate sense,” he said. “But on the other hand, if we ever are going to have a human presence throughout the solar system we need to be able to live and work in regions where the charged particle environment is pretty severe. If we can’t find a way to protect ourselves from that, it will be a very limiting factor for the future of human exploration.”
NASA’s X-43A research vehicle screamed into the record books again Tuesday, demonstrating that an air-breathing engine can fly at nearly 10 times the speed of sound. Preliminary data from the scramjet-powered research vehicle show its revolutionary engine worked successfully at nearly Mach 9.8, or 7,000 mph, as it flew at about 110,000 feet.
The high-risk, high-payoff flight, originally scheduled for Nov. 15, took place in restricted airspace over the Pacific Ocean northwest of Los Angeles. The flight was the last and fastest of three unpiloted flight tests in NASA’s Hyper-X Program. The program’s purpose is to explore an alternative to rocket power for space access vehicles.
“This flight is a key milestone and a major step toward the future possibilities for producing boosters for sending large and critical payloads into space in a reliable, safe, inexpensive manner,” said NASA Administrator Sean O’Keefe. “These developments will also help us advance the Vision for Space Exploration, while helping to advance commercial aviation technology.”
Supersonic combustion ramjets (scramjets) promise more airplane-like operations for increased affordability, flexibility and safety in ultra high-speed flights within the atmosphere and for the first stage to Earth orbit. The scramjet advantage is that once it has been accelerated to about Mach 4 by a conventional jet engine or booster rocket, it can fly at hypersonic speeds, possibly as fast as Mach 15, without carrying heavy oxygen tanks, as rockets must.
The engine, which has no moving parts, is shaped so that it compresses the air passing through it, allowing combustion to occur. Another advantage is that scramjets can be throttled back and flown more like an airplane, unlike rockets, which tend to produce full thrust all the time.
“The work of the Langley-Dryden team and our Vehicle Systems Program has been exceptional,” said NASA’s Associate Administrator for Aeronautics Research J. Victor Lebacqz. “This shows how much we can accomplish when we manage the risk and work together toward a common goal. NASA has made a tremendous contribution to the body of knowledge in aeronautics with the Hyper-X program, as well as making history.”
The flight had been postponed by one day while an instrumentation problem with the X-43A was repaired. By the time the preflight checklist was resumed, not enough time remained to meet the FAA launch deadline of 7 p.m. EST.
Today, the X-43A, attached to its modified Pegasus rocket booster, took off from Dryden Flight Research Center at Edwards Air Force Base, Calif., tucked under the wing of the B-52B launch aircraft. The booster and X-43A were released from the B-52B at 40,000 feet and the booster’s engine ignited, taking the X-43A to its intended altitude and speed. The X-43A then separated from the booster and accelerated on scramjet power to a brief flight at nearly Mach 10.
NASA’s Langley Research Center, Hampton, Va., and Dryden jointly conduct the Hyper-X Program. NASA’s Aeronautics Research Mission Directorate, Washington, manages it. ATK-GASL (formerly Microcraft, Inc.) at Tullahoma, Tenn., and Ronkonkoma, N.Y., built the X-43A aircraft and the scramjet engine, and Boeing Phantom Works, Huntington Beach, Calif., designed the thermal protection and onboard systems. The booster is a modified first stage of a Pegasus rocket built by Orbital Sciences Corp, Chandler, Ariz.
“Swift,” a new NASA satellite, will head for the heavens Nov. 17, designed to detect gamma-ray bursts and whip around to catch them in the act. And the trigger software that makes the flying observatory smart enough to do this comes from the Space Science team at the Los Alamos National Laboratory.
Gamma-ray bursts, first discovered by Los Alamos in the course of nuclear nonproliferation data analysis, occur randomly throughout the universe. They are the most powerful explosions known to mankind, exceeded only by the Big Bang. Swift’s Burst Alert Telescope will detect and locate about two bursts a week and relay their positions to the ground in less than 15 seconds.
By studying the bursts, scientists have the opportunity to illuminate some of the earliest mysteries of the universe. “We believe Swift is capable of observing gamma-ray bursts right back through time to the very first stars that ever formed after the Big Bang,” said lead Los Alamos project scientist Ed Fenimore, a Laboratory Fellow.
The main mission objectives for Swift are to determine what makes gamma-ray bursts tick and, perhaps more importantly, how a burst evolves and interacts with its surroundings: the burst’s afterglow is the only place in the universe where something 10 times the size of the Earth is moving at 0.9999 times the speed of light.
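For perspective, the Lorentz factor at that speed (a textbook special-relativity calculation, not a Swift measurement) is

\[
\gamma = \frac{1}{\sqrt{1-\beta^{2}}} = \frac{1}{\sqrt{1-0.9999^{2}}} \approx 71,
\]

so clocks riding in the outflow run about 71 times slower than ours.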
The component with which Los Alamos is most intimately involved is the Burst Alert Telescope (BAT), hardware built and developed by Goddard Space Flight Center, under the direction of Neal Gehrels. The Los Alamos role was in developing the BAT’s onboard scientific software that, as Fenimore says, “basically tells Swift when to point, and where to point.”
The onboard “trigger” software scans the data from the BAT and determines when a gamma-ray burst is in progress. “Although human eyes on the ground can easily do this, doing it blindly on the satellite is quite difficult,” Fenimore said. “In fact, in past gamma-ray burst experiments, it has been common that nine out of 10 triggers are false alarms. False alarms would be disastrous since Swift will actually slew itself around to try to observe the false source.” Swift turns in space within 70 to 100 seconds to view the fading event.
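The article does not spell out the trigger algorithm, but the standard approach for rate triggers on instruments of this kind is to compare counts in a short foreground window against an extrapolated background and demand high statistical significance. A minimal illustrative sketch, with hypothetical names and thresholds rather than the actual BAT flight code:

```python
import numpy as np

def rate_trigger(counts, bg_window=64, fg_window=4, n_sigma=8.0):
    """Flag time bins where the count rate jumps significantly above
    the recent background level (Poisson statistics assumed).

    counts : 1-D array of detector counts per time bin.
    Returns indices of bins that would raise a burst trigger.
    """
    triggers = []
    for i in range(bg_window, len(counts) - fg_window):
        bg_mean = counts[i - bg_window:i].mean()      # trailing background
        fg = counts[i:i + fg_window].sum()            # candidate burst bins
        expected = bg_mean * fg_window
        sigma = np.sqrt(expected) if expected > 0 else 1.0
        if (fg - expected) / sigma > n_sigma:         # significance test
            triggers.append(i)
    return triggers

# Simulated quiet background with an injected burst-like spike:
rng = np.random.default_rng(0)
lc = rng.poisson(100.0, size=1000)
lc[500:504] += 400
print(rate_trigger(lc)[:3])   # -> bins around 500
```

On the real instrument, a rate increase is reportedly promoted to a burst trigger only after an imaging step confirms a new point source on the sky, which is what keeps the false-alarm rate low.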
GRB location information from Swift will also be broadcast to waiting robotic telescopes on the ground. Among them is the Los Alamos RAPTOR telescope, which can point anywhere within 6 seconds and capture the burst while it is still happening.
The critical second piece of the Los Alamos effort is the software to locate the gamma-ray burst so that the satellite knows exactly which direction it should orient its other telescopes. The BAT uses an imaging technique pioneered by Los Alamos called coded-aperture imaging, and most recently used by Los Alamos aboard the High Energy Transient Explorer (HETE) satellite.
In the imaging equipment aboard Swift, 54,000 pinholes in a panel of lead the size of a full sheet of plywood produce an “image,” actually thousands of overlapping images (approximately 30,000 of them). The Los Alamos software must unscramble those overlapping images and make one stronger, brighter picture from which the precise location of the gamma-ray burst can be found, while eliminating known sources and statistical variations.
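The unscrambling step is conceptually simple: because each pinhole casts a shifted copy of the sky onto the detector, correlating the detector pattern with the known mask makes the overlapping copies add up in phase at the true source position. A toy one-dimensional illustration (the random mask and source positions are invented for the demo, not BAT parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
mask = rng.integers(0, 2, size=n)        # 1 = pinhole open, 0 = opaque lead
sky = np.zeros(n)
sky[[100, 390]] = [50.0, 80.0]           # two point sources

# Each open mask element shifts the sky onto the detector; summing the shifts
# gives the multiplexed "image of overlapping images".
detector = np.zeros(n)
for offset in np.flatnonzero(mask):
    detector += np.roll(sky, offset)
detector += rng.poisson(5.0, size=n)     # background counts

# Cross-correlate with a balanced decoder (open = +1, closed = -1) so the
# flat background cancels and the sources stand out.
decoder = 2.0 * mask - 1.0
recon = np.array([np.dot(np.roll(decoder, s), detector) for s in range(n)])

print(np.argsort(recon)[-2:])            # -> positions near 100 and 390
```

The real two-dimensional problem is the same idea at far larger scale, complicated by the cleaning of known sources and statistical variations mentioned above.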
David Palmer, a Los Alamos astrophysicist with special expertise in coded-aperture imaging and clever algorithms, is the key person for virtually all of the scientific software on BAT, some 30,000 lines of code. Handling the required tasks takes a vast amount of computer code, with hundreds of interacting components. “It was thanks to his grasp of the whole picture in all its complexity that Palmer was able to develop this scientific package,” Fenimore said. “Palmer probably did the work of 20 people on this project.”
To prepare for the ongoing software work during the craft’s two-year life, Fenimore and his team have developed complex simulations at Los Alamos to recreate the BAT instrument’s likely behavior and experiences in space. The simulator allows the team to practice responding to potential issues that may require tuning of the software. The software was designed with “lots of knobs,” as Fenimore phrases it, to allow the team to tweak it continuously. A special challenge for Palmer has been the relative age of the computer aboard the craft: it is a 25 MHz machine, 100 times slower than the computers most people have at home.
The Swift observatory is scheduled for launch at 12:09 p.m. EST on Wednesday, Nov. 17, with a one-hour launch window. The satellite will launch aboard a Boeing Delta II rocket from Cape Canaveral Air Force Station (CCAFS), Fla.
Swift is part of NASA’s medium explorer (MIDEX) program. The hardware was developed by an international team from the United States, the United Kingdom and Italy, with additional scientific involvement in France, Japan, Germany, Denmark, Spain and South Africa.
This image, taken by the High Resolution Stereo Camera (HRSC) on board ESA’s Mars Express spacecraft, shows the detailed structure of Coprates Catena, a southern part of the Valles Marineris canyon system on Mars.
The image was taken during orbit 438 with a ground resolution of approximately 43 metres per pixel. The displayed region covers an area centred at about latitude 14° South and longitude 301° East.
Coprates Catena is a chain of collapsed structures that runs parallel to the main valley, Coprates Chasma.
These collapsed structures are between 2500 and 3000 metres deep, far less than the 8000-metre depth of the main valley. A few landslides can be seen on the valley walls.
Unlike the main valleys, these chains have no connection to the lowland plains. This indicates that they originated solely through expansion of the surface, or collapse, with removal of underlying material (possibly water or ice).
On the valley floor, brighter layers are exposed, which could be material of the same composition as seen in other parts of Valles Marineris, where sulphates have been measured by the OMEGA spectrometer instrument on board Mars Express.