Saturn Has an Unusual Hot Spot

Astronomers using the Keck I telescope in Hawaii are learning much more about a strange, thermal “hot spot” on Saturn that is located at the tip of the planet’s south pole. In what the team is calling the sharpest thermal views of Saturn ever taken from the ground, the new set of infrared images suggests a warm polar vortex at Saturn’s south pole — the first warm polar vortex ever discovered in the solar system. This warm polar cap is home to a distinct compact hot spot, believed to contain the highest measured temperatures on Saturn. A paper announcing the results appears in the Feb. 4 issue of “Science.”

A “polar vortex” is a persistent, large-scale weather pattern in the upper atmosphere, likened to a jet stream on Earth. On Earth, the Arctic Polar Vortex is typically located over eastern North America in Canada and plunges cold arctic air down to the Northern Plains in the United States. Earth’s Antarctic Polar Vortex, centered over Antarctica, is responsible for trapping air and creating unusual chemistry, such as the effects that create the “ozone hole.” Polar vortices are found on Earth, Jupiter, Mars and Venus, and are colder than their surroundings. But new images from the W. M. Keck Observatory show the first evidence of a polar vortex at much warmer temperatures. And the warmer, compact region at the pole itself is quite unusual.

“There is nothing like this compact warm cap in the Earth’s atmosphere,” said Dr. Glenn S. Orton, of the Jet Propulsion Laboratory in Pasadena and lead author of the paper describing the results. “Meteorologists have detected sudden warming of the pole, but on Earth this effect is very short-term. This phenomenon on Saturn is longer-lived because we’ve been seeing hints of it in our data for at least two years.”

The puzzle isn’t that Saturn’s south pole is warm; after all, it has been exposed to 15 years of continuous sunlight, having just reached its summer solstice in late 2002. But both the distinct boundary of a warm polar vortex some 30 degrees latitude from the southern pole and a very hot “tip” right at the pole were completely unexpected.

“If the increased southern temperatures are solely the result of seasonality, then the temperature should increase gradually with increasing latitude, but it doesn’t,” added Dr. Orton. “We see that the temperature increases abruptly by several degrees near 70 degrees south and again at 87 degrees south.”

The abrupt temperature changes may be caused by a concentration of sunlight-absorbing particulates in the upper atmosphere, which trap heat in the stratosphere. This theory explains why the hot spot appears dark in visible light and contains the highest measured temperatures on the planet. However, this alone does not explain why the particles themselves are constrained to the general southern part of Saturn and particularly to a compact area near the tip of Saturn’s south pole. Forced downwelling of relatively dry air would explain this effect, and it is consistent with other observations taken of the tropospheric clouds, but more observations are needed.

More details may be forthcoming from an infrared spectrometer on the joint NASA/ESA Cassini mission, which is currently orbiting Saturn. The Composite Infrared Spectrometer (CIRS) measures continuous spectral information spanning the same wavelengths as the Keck observations, and the two experiments are expected to complement each other. Between March and May 2005, the CIRS instrument on Cassini will be able to look at the south polar region in detail for the first time. The discovery of the hot spot at Saturn’s south pole has prompted the CIRS science team, which includes Dr. Orton, to spend more time looking at this area.

“One of the obvious questions is whether Saturn’s north pole is anomalously cold and whether a cold polar vortex has been established there,” added Dr. Orton. “This is a question that can only be answered by Cassini’s CIRS experiment in the near term, as this region cannot be seen from Earth using ground-based instruments.”

Observations of Saturn were taken in the imaging mode of the Keck Long Wavelength Spectrometer (LWS) on February 4, 2004. Images were obtained at 8.00 microns, which is sensitive to stratospheric methane emission, and also at 17.65 and 24.5 microns, which are sensitive to temperatures at various layers in Saturn’s upper troposphere. The full image of the planet was mosaicked from many sets of individual exposures.

Future work will include more high-resolution thermal imaging of Saturn, particularly because the larger polar vortex region may change over the next few years. The team has also discovered other phenomena that could be time dependent and are best characterized by imaging instruments at Keck, such as a series of east-west temperature oscillations, most prominent near 30 degrees south. These effects appear to be unrelated to anything in Saturn’s relatively featureless visible cloud system, but the variability is reminiscent of east-west temperature waves in Jupiter, which move very slowly compared to the rapid jets tracked by cloud motions.

Funding for this research was provided by NASA’s Office of Space Sciences and Applications, Planetary Astronomy Discipline, and the NASA Cassini project. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini-Huygens mission for NASA’s Science Mission Directorate, Washington, D.C.

The W.M. Keck Observatory is operated by the California Association for Research in Astronomy, a non-profit scientific partnership of the California Institute of Technology, the University of California, and NASA.

Original Source: W.M. Keck News Release

Natural Colour Image of Rhea

The trailing hemisphere of Saturn’s moon Rhea, seen here in natural color, displays bright, wispy terrain similar in appearance to that of Dione, another of Saturn’s moons. At this distance, however, the exact nature of these wispy features remains tantalizingly out of the reach of Cassini’s cameras.

At this resolution, the wispy terrain on Rhea looks like a thin coating painted onto the moon’s surface. Cassini images from December 2004 (see http://photojournal.jpl.nasa.gov/catalog/PIA06163) revealed that, when seen at moderate resolution, Dione’s wispy terrain is composed of many long, narrow and braided fractures.

Images taken using red, green and blue spectral filters were combined to create this natural color view. The images were acquired with the Cassini spacecraft narrow angle camera on Jan. 16, 2005, at a distance of approximately 496,500 kilometers (308,600 miles) from Rhea and at a Sun-Rhea-spacecraft, or phase, angle of 35 degrees. Resolution in the original image was about 3 kilometers (2 miles) per pixel. The image has been rotated so that north on Rhea is up. Contrast was enhanced and the image was magnified by a factor of two to aid visibility.
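As a rough cross-check of the numbers in that caption, the quoted image scale follows directly from the spacecraft’s range and the camera’s angular pixel size. The sketch below assumes a narrow-angle-camera pixel scale of about 6 microradians, a figure not given in the caption, so treat it as illustrative.

```python
# Rough check of the quoted image scale from the imaging geometry.
# The narrow-angle camera's angular pixel size (~6 microradians per pixel)
# is an assumed, illustrative value; it is not stated in the caption above.

RANGE_KM = 496_500          # spacecraft-to-Rhea distance quoted in the caption
PIXEL_SCALE_RAD = 6.0e-6    # assumed angular size of one pixel, in radians

km_per_pixel = RANGE_KM * PIXEL_SCALE_RAD
print(f"approximate resolution: {km_per_pixel:.1f} km per pixel")   # ~3 km, matching the caption
```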

The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA’s Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging team is based at the Space Science Institute, Boulder, Colo.

For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . For images visit the Cassini imaging team home page http://ciclops.org .

Original Source: NASA/JPL News Release

Report Says Beagle 2 Shouldn’t Have Flown

The British National Space Centre has today published the report of the ESA/UK Commission of Inquiry set up to investigate the circumstances and possible reasons that prevented completion of the Beagle 2 mission.

The report was always seen by BNSC and ESA as an internal inquiry. Its purpose was to learn lessons for the future. There were also concerns about the confidentiality of commercial information. The organisations involved were given a strong indication that the information they supplied was only for the use of the inquiry. For these reasons the report was not published. ESA and the UK did, however, think it right that the recommendations of the report should be published, as these covered the most important issues.

The Science and Technology Select Committee was also confidentially given a copy of the full report. Subsequently, in view of the Committee’s strongly held view that the report should be published in full, we have discussed the issue again with ESA and have persuaded them that the report should be published.

We have also had further discussions with the other organisations involved about now publishing the report and they are aware that the report is being published today. The contents of the report have not been agreed with the parties.

A full copy of the report, including recommendations, can be found at the following website address: http://www.bnsc.gov.uk/assets/channels/resources/press/report.pdf

NOTES TO EDITORS

1. The Beagle 2 inquiry was launched on February 11, 2004, by Lord Sainsbury, UK Minister for Science and Innovation, and Jean-Jacques Dordain, ESA Director General, to investigate the circumstances and possible reasons that prevented completion of the Beagle 2 mission.

2. The Inquiry Commission was set up jointly between ESA and BNSC and was chaired by the ESA Inspector General. The Commission included senior managers and experts from Europe and also from NASA and Russia. Its remit was to:

* assess the available data/documentation acquired during development, integration and testing of the Beagle 2 lander on Earth and that pertaining to the cruise phase operations prior to release of the spacecraft to Mars;

* analyse the programmatic environment (i.e. decision processes, funding level and resources, management and responsibilities, interactions between the various entities) throughout the project;

* identify possible issues and shortcomings, both programmatic and technical, in the above and in the approach used, which might have contributed to the loss of the mission.

3. The recommendations from the inquiry were published on May 24, 2004, when ESA also announced the lessons learnt from the inquiry and its plans to implement the recommendations.

4. The Beagle 2 project was led by the Open University, providing the science lead, and EADS-Astrium, the prime industrial contractor responsible for the main design, development and management of the lander.

5. The Beagle 2 lander was funded through a partnership arrangement involving the Open University, EADS-Astrium, the Department of Trade and Industry (DTI), the Particle Physics and Astronomy Research Council (PPARC), the Office of Science and Technology and ESA. Funding also came from the National Space Science Centre and the Wellcome Trust. The UK principal investigators for Beagle 2 came from the Open University (gas analysis package), Leicester University (environmental sensors and x-ray spectrometer) and Mullard Space Science Laboratory (imaging systems).

6. BNSC is a partnership of Government Departments and Research Councils with an interest in the development or exploitation of space technologies. BNSC is the UK Government body responsible for UK civil space policy, helping to gain the best possible scientific, economic and social benefits from putting space to work.

Original Source: BNSC News Release

Atlas and Proton Launch on the Same Day

In a span of less than 10 hours, International Launch Services (ILS) placed two satellites in orbit today from space centers on opposite sides of the world.

The flights were carried out by, respectively, a Russian-built Proton/Breeze M vehicle from Baikonur and an American Atlas III vehicle from Cape Canaveral. ILS, a joint venture of Lockheed Martin of the United States (NYSE: LMT) and Khrunichev of Russia, markets both vehicles worldwide and manages the missions.

These back-to-back launches were the first missions in a busy year for ILS.

“No one else can do this,” said ILS President Mark Albrecht. “The cornerstone of ILS is offering two independent vehicles, launching from independent launch sites, which enables us to service two customers at the same time.”

The Proton vehicle lifted off at 9:27 p.m. EST Wednesday (7:27 a.m. today in Baikonur, 2:27 today GMT), carrying the AMC-12 satellite for SES AMERICOM. After about 9 hours and 19 minutes of flight, the satellite separated from the launcher and went into orbit. AMC-12 is expected to go into service in April, providing communications for the Americas, Europe, the Middle East and Africa. The satellite was built by Alcatel Space of France.

The Atlas III vehicle, designated AC-206, lifted off from Cape Canaveral’s Space Launch Complex 36B at 2:41 a.m. EST (7:41 GMT) with a payload for the National Reconnaissance Office. The payload was released into orbit about 79 minutes later. Details about the payload and mission, known as NROL-23, are classified.

“What an accomplishment!” said Albrecht. “We have had tandem missions before, and it’s challenging and exciting, especially for those of us watching from ILS headquarters. The teams on site focus on only one thing – the success of their particular mission.”

Dual capabilities give ILS “a robust launch tempo,” Albrecht said. “Both vehicles launch commercial and government missions, which keep the manifests busy and keep the teams sharp.”

These launches set records for ILS, namely:

* Shortest timespan between launches on both vehicles: 5 hours and 14 minutes (previous records were 7 hours, 10 minutes on Aug. 21/22, 2002, and 9 hours, 12 minutes on June 30/July 1, 2000).
* 75th consecutive successful Atlas launch.
* Final launch of Atlas III vehicle, the second Atlas family to have achieved 100% success throughout its lifetime.
* 5th launch in 12 months for a single customer – SES AMERICOM (AMC-10 on Atlas Feb. 5, 2004; AMC-11 on Atlas May 19, 2004; AMC-15 on Proton Oct. 15, 2004; AMC-16 on Atlas Dec. 16, 2004; and AMC-12 on Proton Feb. 3, 2005).

The next scheduled ILS mission is at Cape Canaveral in March, an Atlas V launch with the Inmarsat 4-F1 satellite. Another Atlas V is scheduled to launch the Mars Reconnaissance Orbiter for NASA in August. Proton missions planned through the rest of the year include communications satellites for DIRECTV, MEASAT, Telesat Canada, SES AMERICOM, SES GLOBAL and Arabsat.

This followed a year in which ILS launched 10 satellites, all successfully ? six on Atlas and four on Proton. The Russian government also used Proton for four missions. With a remarkable launch rate of 72 missions since 2000, the Atlas and Proton launch vehicles have consistently demonstrated the reliability and flexibility that have made them the preferred choice among satellite operators worldwide. Since the beginning of 2003, ILS has signed more new commercial contracts than all of its competitors combined.

ILS was formed in 1995, and is based in McLean, Va., a suburb of Washington, D.C.

Original Source: ILS News Release

Wallpaper: V838 Monocerotis

The Hubble Space Telescope’s latest image of the star V838 Monocerotis (V838 Mon) reveals dramatic changes in the illumination of surrounding dusty cloud structures. The effect, called a light echo, has been unveiling never-before-seen dust patterns ever since the star suddenly brightened for several weeks in early 2002.

The illumination of interstellar dust comes from the red supergiant star at the middle of the image, which gave off a pulse of light three years ago, somewhat similar to setting off a flashbulb in a darkened room. The dust surrounding V838 Mon may have been ejected from the star during a previous explosion, similar to the 2002 event.

The echoing of light through space is similar to the echoing of sound through air. As light from the stellar explosion continues to propagate outwards, different parts of the surrounding dust are illuminated, just as a sound echo bounces off of objects near the source, and later, objects further from the source. Eventually, when light from the back side of the nebula begins to arrive, the light echo will give the illusion of contracting, and finally it will disappear.
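The geometry behind that apparent expansion can be made concrete with the standard light-echo relation: dust lying a distance z in front of the star is lit up when its projected distance from the star satisfies r^2 = ct(ct + 2z), t years after the pulse. The sketch below uses a hypothetical dust sheet two light-years in front of the star; the numbers are illustrative, not taken from the V838 Mon observations.

```python
import numpy as np

# Projected radius of a light-echo ring, t years after the pulse, for a thin dust
# sheet a distance z (in light-years) in front of the star. Working in light-years
# and years makes c = 1. The illuminated dust lies on the paraboloid
#   r^2 = c*t * (c*t + 2*z).

def echo_radius_ly(t_yr, z_ly):
    ct = 1.0 * t_yr          # c = 1 in these units
    return np.sqrt(ct * (ct + 2.0 * z_ly))

# Hypothetical dust sheet 2 light-years in front of the star:
for t in (1, 2, 3):
    r = echo_radius_ly(t, 2.0)
    print(f"t = {t} yr: ring radius ~ {r:.1f} ly, apparent expansion ~ {r / t:.1f} c")
# The ring appears to grow faster than light even though nothing actually moves;
# only the illumination sweeps outward through dust that was already there.
```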

V838 Mon is located about 20,000 light-years away from Earth in the direction of the constellation Monoceros, placing the star at the outer edge of our Milky Way galaxy. The Hubble telescope has imaged V838 Mon and its light echo several times since the star’s outburst. Each time Hubble observes the event, different thin sections of the dust are seen as the pulse of illumination continues to expand away from the star at the speed of light, producing a constantly changing appearance. During the outburst event whose light reached Earth in 2002, the normally faint star suddenly brightened, becoming 600,000 times more luminous than our Sun.

The new image of V838 Mon, taken with Hubble’s Advanced Camera for Surveys, was prepared from images obtained through filters that isolate blue, green, and infrared light. These images have been combined to produce a full-color picture that approximates the true colors of the light echo and the very red star near the center.

Original Source: Hubble News Release

Rovers are Getting a Little Dusty

Since landing on Mars a year ago, NASA’s pair of six-wheeled geologists have been constantly exposed to martian winds and dust. Because the rovers use solar power and sunlight is currently limited on Mars, the rovers can only cover 50 to 100 feet on a good day. The sunlight is seasonal and also power-limiting as the rovers age and get covered by dust. Among the failure modes that could eventually retire the rovers, an electronic glitch or dust accumulation is more likely than a mechanical breakage.

Both rovers have been coated by some dust falling out of the atmosphere during that time, with estimates of the dust thickness ranging from 1 to 10 micrometers, or between 1/100th and 1/10th the width of a single human hair.

Of the two, NASA’s Mars Exploration Rover Spirit is definitely the more dust-laden. The Opportunity rover appears to be collecting less dust, perhaps because of a cleaning by wind or even “scavenging” of dust by frost that forms on the rover some nights during the martian winter. In imaging the texture of the rocks found by the Opportunity rover, the mission team has compared them to spongy sandstone. They are pockmarked, porous, dried and cracked. The voids and holes in these spongy rocks may have arisen from repeated cycles of evaporation that hardened the surfaces, followed by a washing away that dissolved the more soluble interior portions.

As a result of this dust accumulation, Spirit has gradually experienced a decline in power as the thin layer of dust on its solar panels blocks some of the sunlight that is converted to electricity. The panoramic camera team’s analysis indicates that the layer of dust on Spirit’s calibration target is about 70 percent thicker than that on Opportunity’s.

Prior to this mission, the Meridiani plains were compared to the Rust Belt states, those in the middle north of America (Michigan, Ohio, Pennsylvania). The other comparison was to the red dirt found in Oklahoma and northern Texas – the so-called Red River region. In addition to red dirt, the rovers have found bedrock. On Earth, bedrock is common in northern New England, particularly Maine and New Hampshire, the Granite State. But the wind blows around enough dry dust on Mars to cover what might be exposed bedrock. This debris layer blankets most of the rest of the planet. Additionally, meteors have pulverized the martian surface, leaving a thick crushed layer.

A portion of Mars’ water vapor is moving from the north pole toward the south pole during the current northern-summer and southern-winter period. The transient increase in atmospheric water at Meridiani, just south of the equator, plus low temperatures near the surface, contributes to the appearance of the clouds and frost. Frost shows up some mornings on the rover itself. The possibility that frost has a clumping effect on the dust accumulated on the solar panels is under consideration as a factor in unexpected boosts of electric output from the panels.

The atmosphere of Mars contains water, but in minuscule amounts. “Even though we are currently seeing frequent clouds with Opportunity, if you squeezed all of the water out of the atmosphere, it would be less than 100 microns deep, about the thickness of a human hair,” said Mark Lemmon of Texas A&M University’s College of Geosciences.

Because of the lack of water, weather on Mars has a lot to do with dust in the atmosphere. A small dust storm one month before the rovers landed spread small amounts of dust around the planet.

“Both rovers saw very dusty skies at first. It was only after the dust settled after a few months that Spirit could see the rim of the crater it was in, Gusev Crater, about 40 miles away,” Lemmon said.

British scientists have speculated that the British Mars Lander, Beagle 2, crashed because the atmosphere was thinner than usual as a result of heating caused by atmospheric dust from the December storm.

“I can think of at least three things that could kill us,” said Cornell’s principal investigator for the Mars rovers, Steve Squyres, when discussing the mission lifetime with Astrobiology Magazine. “The first is dust build-up on the solar arrays. But the dust build-up is not that bad, especially for Opportunity, and with spring approaching both vehicles should do OK for a while.”

“The second thing is if something mechanical goes wrong,” said Squyres. “The rovers have a lot of moving parts, and we’ve seen a few mechanical funnies on Spirit. Nothing serious, but enough to catch your attention. Stuff could just wear out.”

“The third thing is, we’ve got a lot of single-string electronics in these vehicles,” said Squyres. “There’s not a lot of redundancy. Now, we have the ultimate redundancy in that there are two vehicles. But within each rover there are a lot of electrical parts that, if they just flat-out fail on us, the rover’s dead. Bang! It just dies overnight and never talks to us again. That could happen.”

Original Source: NASA Astrobiology Magazine

Missing Matter Could Be Clouds of Gas

NASA’s Chandra X-ray Observatory has discovered two huge intergalactic clouds of diffuse hot gas. These clouds are the best evidence yet that a vast cosmic web of hot gas contains the long-sought missing matter – about half of the atoms and ions in the Universe.

Various measurements give a good estimate of the mass-density of the baryons – the neutrons and protons that make up the nuclei of atoms and ions – in the Universe 10 billion years ago. However, sometime during the last 10 billion years a large fraction of the baryons, commonly referred to as “ordinary matter” to distinguish them from dark matter and dark energy, have gone missing.

“An inventory of all the baryons in stars and gas inside and outside of galaxies accounts for just over half the baryons that existed shortly after the Big Bang,” explained Fabrizio Nicastro of the Harvard-Smithsonian Center for Astrophysics, and lead author of a paper in the 3 February 2005 issue of Nature describing the recent research. “Now we have found the likely hiding place of the missing baryons.”

Nicastro and colleagues did not just stumble upon the missing baryons – they went looking for them. Computer simulations of the formation of galaxies and galaxy clusters indicated that the missing baryons might be contained in an extremely diffuse web-like system of gas clouds from which galaxies and clusters of galaxies formed.

These clouds have defied detection because of their predicted temperature range of a few hundred thousand to a million degrees Celsius, and their extremely low density. Evidence for this warm-hot intergalactic matter (WHIM) had been detected around our Galaxy, or in the Local Group of galaxies, but the lack of definitive evidence for WHIM outside our immediate cosmic neighborhood made any estimates of the universal mass-density of baryons unreliable.

The discovery of much more distant clouds came when the team took advantage of the historic X-ray brightening of the quasar-like galaxy Mkn 421 that began in October of 2002. Two Chandra observations of Mkn 421, in October 2002 and July 2003, yielded excellent-quality X-ray spectral data. These data showed that two separate clouds of hot gas, at distances from Earth of 150 million light years and 370 million light years, were filtering out, or absorbing, X-rays from Mkn 421.

The X-ray data show that ions of carbon, nitrogen, oxygen, and neon are present, and that the temperatures of the clouds are about 1 million degrees Celsius. Combining these data with observations at ultraviolet wavelengths enabled the team to estimate the thickness (about 2 million light years) and mass density of the clouds.

Assuming that the size and distribution of the clouds are representative, Nicastro and colleagues could make the first reliable estimate of average mass density of baryons in such clouds throughout the Universe. They found that it is consistent with the mass density of the missing baryons.
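For a sense of scale of the “mass density of baryons” being discussed, the cosmic average can be worked out from the critical density of the Universe. The parameter values below (a Hubble constant of 70 km/s/Mpc and a baryon fraction of about 4.5 percent) are assumptions for illustration, not figures from the paper.

```python
import math

# Back-of-envelope estimate of the mean baryon density of the Universe.
# Assumed parameters (not from the article): H0 = 70 km/s/Mpc, Omega_b = 0.045.
H0_KM_S_MPC = 70.0
OMEGA_B = 0.045
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_PROTON = 1.673e-27     # kg
MPC_IN_M = 3.086e22

H0 = H0_KM_S_MPC * 1e3 / MPC_IN_M               # Hubble constant in 1/s
rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)    # critical density, kg/m^3
n_baryon = OMEGA_B * rho_crit / M_PROTON        # mean baryon number density, 1/m^3

print(f"critical density ~ {rho_crit:.1e} kg/m^3")
print(f"mean baryon density ~ {n_baryon:.2f} atoms per cubic meter")
# Roughly a quarter of an atom per cubic meter on average; the WHIM clouds are
# thought to be denser than this, but still far too tenuous to see directly.
```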

Mkn 421 was observed three times with Chandra’s Low-Energy Transmission Grating (LETG), twice in conjunction with the High Resolution Camera (May 2000 and July 2003) and once with the Advanced CCD Imaging Spectrometer (October 2002). The distance to Mkn 421 is 400 million light years.

NASA’s Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for NASA’s Office of Space Science, Washington. Northrop Grumman of Redondo Beach, Calif., formerly TRW, Inc., was the prime development contractor for the observatory. The Smithsonian Astrophysical Observatory controls science and flight operations from the Chandra X-ray Center in Cambridge, Mass.

Additional information and images are available at: http://chandra.harvard.edu and http://chandra.nasa.gov

Original Source: Chandra News Release

Where Did the Modern Telescope Come From?

If you think about it, it was just a matter of time before the first telescope was invented. People have been fascinated by crystals for millennia. Many crystals – quartz for instance – are completely transparent. Others – rubies – absorb some frequencies of light and pass others. Shaping crystals into spheres can be done by cleaving, tumbling, and polishing – this removes sharp edges and rounds the surface. Dissecting a crystal begins with finding a flaw. Creating a half-sphere – or crystal segment – creates two different surfaces. Light is gathered by the convex front face and projected toward a point of convergence by the planar back face. Because crystal segments have severe curves, the point of focus may be very close to the crystal itself. Due to their short focal lengths, crystal segments make better microscopes than telescopes.

It wasn’t the crystal segment – but the lens of glass – that made modern telescopes possible. Convex lenses came out of glass ground in a way to correct far-sighted vision. Although both spectacles and crystal segments are convex, far-sighted lenses have less severe curves. Rays of light are only slightly bent from the parallel. Because of this, the point where the image takes form is much farther away from the lens. This creates image scale large enough for detailed human inspection.

The first use of lenses to augment sight can be traced back to the Middle East of the 11th century. An Arabian text (Opticae Thesaurus, written by the scientist-mathematician Alhazen) notes that segments of crystal balls could be used to magnify small objects. In the late 13th century, an English monk (possibly referencing Roger Bacon’s Perspectiva of 1267) is said to have created the first practical near-focus spectacles to aid in reading the Bible. It wasn’t until 1440 that Nicholas of Cusa ground the first lens to correct near-sightedness-1. And it would be another four centuries before defects in lens shape itself (astigmatism) would be aided by a set of spectacles. (This was accomplished by the British astronomer George Airy in 1827, some 220 years after another, more famous astronomer – Johannes Kepler – first accurately described the effect of lenses on light.)

The earliest telescopes took form just after spectacle grinding became well-established as a means to correct both myopia and presbyopia. Because far-sighted lenses are convex, they make good “collectors” of light. A convex lens takes parallel beams from the distance and bends them to a common point of focus. This creates a virtual image in space – one that can be inspected more closely using a second lens. The virtue of a collecting lens is twofold: it concentrates light (increasing its intensity) and amplifies image scale – both to a degree potentially far greater than the eye alone is capable of.
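Those two virtues can be put in numbers with the usual first-order relations: light grasp scales with the square of the aperture ratio, and magnification is the ratio of objective to eyepiece focal length. The apertures and focal lengths in the sketch below are illustrative assumptions, not figures from the text.

```python
# Two first-order figures of merit for a simple two-lens telescope.
# The example numbers (50 mm objective, 7 mm dark-adapted pupil, 500 mm and
# 25 mm focal lengths) are illustrative assumptions.

def light_grasp(aperture_mm, pupil_mm=7.0):
    """How much more light the objective collects than the naked eye:
    the ratio of collecting areas, i.e. the square of the diameter ratio."""
    return (aperture_mm / pupil_mm) ** 2

def magnification(f_objective_mm, f_eyepiece_mm):
    """Angular magnification: objective focal length over eyepiece focal length."""
    return f_objective_mm / f_eyepiece_mm

print(f"light grasp:   {light_grasp(50.0):.0f}x the naked eye")      # ~51x
print(f"magnification: {magnification(500.0, 25.0):.0f}x")           # 20x
```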

Concave lenses (used to correct near-sightedness) splay light outward and make things appear smaller to the eye. A concave lens can increase the focal length of the eye whenever the eye’s own system (fixed cornea and morphing lens) falls short of focusing an image on the retina. Concave lenses make good eyepieces because they enable the eye to more closely inspect the virtual image cast by a convex lens. This is possible because convergent rays from a collecting lens are refracted toward the parallel by a concave lens. The effect is to show a nearby virtual image as though at a great distance. A single concave lens allows the eye lens to relax as if focused on infinity.

Combining convex and concave lenses was just a matter of time. We can imagine the very first occasion occurring as children toyed with the lens-grinder’s toil of the day – or possibly when the optician felt called to inspect one lens using another. Such an experience must have seemed almost magical: A distant tower instantly looms as if approached at the end of a long stroll; unrecognizable figures are suddenly seen to be close friends; natural boundaries – such as canals or rivers – are leapt over as though Mercury’s own wings were attached to the heels…

Once the telescope came to be, two new optical problems presented themselves. Light collecting lenses create curved virtual images. That curve is slightly “bowl-shaped” with the bottom turned toward the observer. This of course is just the opposite of how the eye itself sees the world. For the eye sees things as though arrayed on a great sphere whose center lies on the retina. So something had to be done to draw perimeter rays back toward the eye. This problem was partially resolved by the astronomer Christiaan Huygens in the 1650s. He did this by combining several lenses together as a unit. The use of two lenses brought more of the peripheral rays from a collecting lens toward the parallel. Huygens’ new eyepiece effectively flattened the image and allowed the eye to achieve focus across a wider field of view. But that field would still induce claustrophobia in most observers of today!

The final problem was more intractable – refracting lenses bend light based on wavelength or frequency. The greater the frequency, the more a particular color of light is bent. For this reason, objects displaying light of various colors (polychromatic light) are not seen at the same point of focus across the electromagnetic spectrum. Basically, lenses act in ways similar to prisms – creating a spread of colors, each with its own unique focal point.
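The size of that color spread can be estimated from how a glass’s refractive index varies with wavelength, since a thin lens’s focal length scales as 1/(n - 1). The Cauchy dispersion coefficients below are representative values for an ordinary crown glass, assumed for illustration.

```python
# How a single crown-glass lens spreads focus with color. The Cauchy coefficients
# A and B are assumed, representative values for an ordinary crown glass; the
# 1000 mm focal length is likewise illustrative.

A, B = 1.5046, 0.00420   # n(lambda) = A + B / lambda^2, with lambda in micrometers

def refractive_index(wavelength_um):
    return A + B / wavelength_um**2

def focal_length_mm(wavelength_um, f_ref_mm=1000.0, ref_um=0.589):
    """Thin-lens focal length scales as 1/(n - 1); normalise so the lens has
    f_ref_mm at the reference (yellow) wavelength."""
    return f_ref_mm * (refractive_index(ref_um) - 1.0) / (refractive_index(wavelength_um) - 1.0)

for name, lam in (("blue", 0.486), ("yellow", 0.589), ("red", 0.656)):
    print(f"{name:6s} ({lam:.3f} um): f = {focal_length_mm(lam):7.1f} mm")
# Blue light comes to focus roughly 15 mm closer than red for this lens,
# which is the "spread of colors" a single objective cannot avoid.
```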

Galileo’s first telescope only solved the problem of getting an eye close enough to magnify the virtual image. His instrument was composed of two lenses separable by a controlled distance to set focus. The objective lens had a less severe curve to collect light and bring it to various points of focus depending on color-frequency. The smaller lens – possessed of a more severe curve of shorter focal length – allowed Galileo’s observing eye to get close enough to the image to see magnified detail.

But Galileo’s scope could only be brought to focus near the middle of the eyepiece field of view. And focus could only be set based on the dominant color emitted or reflected by whatever Galileo was viewing at the time. Galileo usually observed bright studies – like the Moon, Venus, and Jupiter – using an aperture stop and took some pride in having come up with the idea!

Christiaan Huygens created the first – Huygenian – eyepiece after the time of Galileo. This eyepiece consists of two plano-convex lenses facing the collecting lens – not a single concave lens. The focal plane of the two lenses lies between the objective and eye lens elements. The use of two lenses flattened the curve of the image – but only over a score or so degrees of apparent field of view. Since Huygens’ time, eyepieces have become much more sophisticated. Beginning with this original concept of multiplicity, today’s eyepieces can add another half-dozen or so optical elements rearranged in both shape and position. Amateur astronomers can now purchase eyepieces off the shelf giving reasonably flat fields exceeding 80 degrees in apparent diameter-2.

The third problem – that of chromatically tinged multi-color images – was not solved in telescopy until a working reflector telescope was designed and constructed by Sir Isaac Newton in the 1670s. That telescope eliminated the collecting lens altogether – though it still required the use of a refracting eyepiece (which contributes far less to “false color” than the objective does).

Meanwhile, early attempts to fix the refractor were simply to make it longer. Scopes up to 140 feet in length were devised. None had especially large lens diameters. Such spindly dinosaurs required a truly adventurous observer to use – but they did “tone down” the color problem.

Despite eliminating color error, early reflectors had problems too. Newton’s scope used a spherically ground speculum mirror. Compared to the aluminum coating of modern reflector mirrors, speculum is a weak performer. At roughly three-quarters the light gathering ability of aluminum, speculum loses about one magnitude in light grasp. Thus the six-inch instrument devised by Newton behaved more like a contemporary 4-inch model. But that is not what made Newton’s instrument hard to sell; the real problem was that it delivered very poor image quality, due to the use of that spherically ground primary mirror.
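The light-grasp arithmetic can be checked with the standard relations: each reflection multiplies the throughput by the mirror’s reflectivity, a throughput ratio maps to magnitudes via 2.5 log10, and light grasp scales with aperture area. The reflectivity values below (roughly 0.66 for fresh speculum and 0.88 for bare aluminum in visible light) are assumed, illustrative figures.

```python
import math

# A Newtonian reflector has two mirror surfaces (primary plus diagonal), so the
# light reaching the eyepiece is reduced by the reflectivity squared. The
# reflectivity values used here are assumptions for illustration.

def throughput(reflectivity, surfaces=2):
    return reflectivity ** surfaces

def magnitude_loss(reflectivity, surfaces=2):
    # A flux ratio maps to magnitudes via 2.5 * log10.
    return 2.5 * math.log10(1.0 / throughput(reflectivity, surfaces))

def effective_aperture_in(diameter_in, reflectivity, surfaces=2):
    # Light grasp scales with area times throughput, so the equivalent
    # "perfect mirror" diameter scales with the square root of the throughput.
    return diameter_in * math.sqrt(throughput(reflectivity, surfaces))

for name, R in (("speculum", 0.66), ("aluminum", 0.88)):
    print(f"{name:8s}: loss ~ {magnitude_loss(R):.1f} mag, "
          f"a 6-inch acts like a {effective_aperture_in(6.0, R):.1f}-inch perfect mirror")
```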

Newton’s mirror did not bring all rays of light to a common focus. The fault didn’t lie with the speculum – it lay with the shape of the mirror, which – if extended 360 degrees – would trace a complete circle. Such a mirror is incapable of bringing central light beams to the same point of focus as those nearer the rim. It wasn’t until 1740 that Scotland’s John Short corrected this problem (for on-axis light) by parabolizing the mirror. Short accomplished this in a very practical manner: since parallel rays nearer the center of a spherical mirror overshoot marginal rays, why not just deepen the center and rein them in?

It wasn’t until the 1850s that silver replaced speculum as the mirror surface of choice. Of course, the more than 1000 parabolic reflectors fabricated by John Short all had speculum mirrors. And silver, like speculum, loses reflectivity rather quickly over time to oxidation. By 1930, the first professional telescopes were being coated with more durable and reflective aluminum. Despite this improvement, small reflectors bring less light to focus than refractors of comparable aperture.

Meanwhile, refractors evolved too. During John Short’s time, opticians figured out something Newton had not – how to get red and green light to merge at a common point of focus by refraction. This was first accomplished by Chester Moor Hall in 1725 and rediscovered a quarter century later by John Dollond. Hall and Dollond combined two different lenses – one convex and the other concave. Each consisted of a different glass type (crown and flint) refracting light differently (based on refractive indices). The convex lens of crown glass did the immediate task of collecting light of all colors. This bent photons inward. The negative lens splayed the converging beam slightly outward. Where the positive lens caused red light to overshoot focus, the negative lens caused red to undershoot. Red and green blended, and the eye saw yellow. The result was the achromatic refractor telescope – a type favored by many amateur astronomers today as an inexpensive, small-aperture, wide-field instrument, though at shorter focal ratios its image quality is less than ideal.
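The design rule behind that crown-and-flint pairing can be written down compactly: for two thin lenses in contact, the colors merge when the lens powers satisfy P1/V1 + P2/V2 = 0, where V is each glass’s Abbe number (a measure of its dispersion), while the powers still sum to the desired total. The Abbe numbers and the 1000 mm target focal length in the sketch are illustrative assumptions.

```python
# Thin-lens achromatic doublet: choose the two focal lengths so that
#   P_crown / V_crown + P_flint / V_flint = 0   (color correction)
#   P_crown + P_flint = P_total                 (desired overall power)
# where P = 1/f and V is the Abbe number. The values below are illustrative.

def achromat_focal_lengths(f_total_mm, v_crown, v_flint):
    p_total = 1.0 / f_total_mm
    p_crown = p_total * v_crown / (v_crown - v_flint)    # positive: convex crown element
    p_flint = -p_total * v_flint / (v_crown - v_flint)   # negative: concave flint element
    return 1.0 / p_crown, 1.0 / p_flint

f_crown, f_flint = achromat_focal_lengths(1000.0, v_crown=59.0, v_flint=36.0)
print(f"crown element: f = {f_crown:+.0f} mm, flint element: f = {f_flint:+.0f} mm")
# Roughly +390 mm and -640 mm: a strong convex crown lens paired with a weaker
# concave flint lens, the convex/concave pairing described above.
```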

It wasn’t until the mid-nineteenth century that opticians managed to get blue-violet to join red and green at focus. That development initially came out of the use of exotic materials (fluorite) as an element in the doublet objectives of high-powered optical microscopes – not telescopes. Three-element telescope designs using standard glass types – triplets – solved the problem as well some forty years later (just before the twentieth century).

Today’s amateur astronomers can choose from a wide assortment of scope types and manufacturers. There is no one scope for all skies, eyes, and celestial studies. Issues of field flatness (particularly with fast Newtonian telescopes) and hefty optical tubes (associated with large refractors) have been addressed by new optical configurations developed in the 1930s. Instrument types – such as the SCT (Schmidt-Cassegrain telescope) and MCT (Maksutov-Cassegrain telescope), plus Newtonian-style Schmidt and Maksutov variants and oblique reflectors – are now manufactured in the USA and throughout the world. Each scope type was developed to address some valid concern or another related to scope size, bulk, field flatness, image quality, contrast, cost, and portability.

Meanwhile refractors have taken center stage among optophiles – folks wanting the highest possible image quality irrespective of other constraints. Fully apochromatic (color-corrected) refractors provide some of the most stunning images available for optical, photographic, and CCD imaging use. But alas, such models are limited to smaller apertures due to significantly higher costs of materials (exotic low-dispersion crystals & glass), manufacture (up to six optical surfaces must be shaped) and greater load bearing requirements (due to heavy disks of glass).

All of today’s variety in scope types began with the discovery that two lenses of unequal curvature could be held up to the eye to transport human perception over great distances. Like many great technological advances, the modern astronomical telescope emerged out of three fundamental ingredients: Necessity, imagination, and a growing understanding of the way energy and matter interact.

So where did the modern astronomical telescope come from? Certainly the telescope went through a long period of constant improvement. But perhaps, just perhaps, the telescope is at essence a gift of the Universe itself exulting in profound admiration through human eyes, hearts, and minds…

-1 Questions exist as to who first created spectacles correcting far- and near-sighted vision. It is unlikely that Abu Ali al-Hasan Ibn al-Haitham or Roger Bacon ever used a lens in this way. Confusing the issue of provenance is the question of how spectacles were actually worn. It is likely that the first visual aid was simply held to the eye as a monocle – necessity taking over from there. But would such a primitive method be historically recounted as “the origin of the spectacle”?

-2 The ability of a particular eyepiece to compensate for a necessarily curved virtual image is limited fundamentally by effective focal ratio and scope architecture. Thus telescopes whose focal lengths are many times their apertures present less of an instantaneous curve at the “image plane”. Meanwhile, scopes that refract light initially (catadioptrics as well as refractors) have the advantage of better handling off-axis light. Both factors increase the radius of curvature of the projected image and simplify the eyepiece’s task of presenting a flat field to the eye.

About The Author:
Inspired by the early 1900’s masterpiece: “The Sky Through Three, Four, and Five Inch Telescopes”, Jeff Barbour got a start in astronomy and space science at the age of seven. Currently Jeff devotes much of his time observing the heavens and maintaining the website Astro.Geekjoy.

Swift is Now Fully Operational

The Swift satellite’s Ultraviolet/Optical Telescope (UVOT) has seen first light, capturing an image of the Pinwheel Galaxy, long loved by amateur astronomers as the “perfect” face-on spiral galaxy. The UVOT now remains poised to observe its first gamma-ray burst and the Swift observatory, launched into Earth orbit in November 2004, is now fully operational.

Swift is a NASA-led mission dedicated to the gamma-ray burst mystery. These random and fleeting explosions likely signal the birth of black holes. With the UVOT turned on, Swift now is fully operational. Swift’s two other instruments — the Burst Alert Telescope (BAT) and the X-ray Telescope (XRT) — were turned on over the past several weeks and have been snapping up gamma-ray bursts ever since.

“After many years of effort building the UVOT, it was exciting to point it toward the famous Pinwheel Galaxy, M101,” said Peter Roming, UVOT Lead Scientist at Penn State. “The ultraviolet wavelengths in particular reveal regions of star formation in the galaxy’s wispy spiral arms. But more than a pretty image, this first-light observation is a test of the UVOT’s capabilities.”

Swift’s three telescopes work in unison. The BAT detects gamma-ray bursts and autonomously turns the satellite in seconds to bring the burst within view of the XRT and the UVOT, which provide detailed follow-up observations of the burst afterglow. Although the burst itself is gone within seconds, scientists can study the afterglow for clues about the origin and nature of the burst, much like detectives at a crime scene.

The UVOT serves several important functions. First, it will pinpoint the gamma-ray burst location a few minutes after the BAT detection. The XRT provides a burst position within a 1- to 2-arcsecond range. The UVOT will provide sub-arcsecond precision, a spot on the sky far smaller than the eye of a needle at arm’s length. This information is then relayed to scientists at observatories around the world so that they can view the afterglow with other telescopes.
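The needle comparison is easy to quantify with the small-angle relation (angle is roughly size divided by distance). The 1 mm needle eye and 0.7 m arm length in the sketch are assumed for illustration.

```python
import math

# Small-angle size of an object: angle ~ size / distance, converted to arcseconds.
# The 1 mm needle eye and 0.7 m arm length are illustrative assumptions.

ARCSEC_PER_RAD = 180.0 * 3600.0 / math.pi

def angle_arcsec(size_m, distance_m):
    return (size_m / distance_m) * ARCSEC_PER_RAD

needle_eye = angle_arcsec(1.0e-3, 0.7)
print(f"needle eye at arm's length ~ {needle_eye:.0f} arcseconds")          # ~300 arcsec
print(f"a 0.5-arcsecond position is ~{needle_eye / 0.5:.0f}x smaller still")
```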

As the name implies, the UVOT captures the optical and ultraviolet component of the fading burst afterglow. “The ‘big gun’ optical observatories such as Hubble, Keck, and VLT have provided useful data over the years, but only for the later portion of the afterglow,” said Keith Mason, the U.K. UVOT Lead at University College London’s Mullard Space Science Laboratory. “The UVOT isn’t as powerful as these observatories, but has the advantage of observing from the very dark skies of space. Moreover, it will start observing the burst afterglow within minutes, as opposed to the day-long or week-long lag times inherent with heavily used observatories. The bulk of the afterglow fades within hours.”

The ultraviolet portion will be particularly revealing, said Roming. “We know nearly nothing about the ultraviolet part of a gamma-ray burst afterglow,” he said. “This is because the atmosphere blocks most ultraviolet rays from reaching telescopes on Earth, and there have been few ultraviolet telescopes in orbit. We simply haven’t yet reached a burst fast enough with a UV telescope.”

The UVOT’s imaging capability will enable scientists to understand the shape of the afterglow as it evolves and fades. The telescope’s spectral capability will enable detailed analysis of the dynamics of the afterglow, such as the temperature, velocity, and direction of material ejected in the explosion.

The UVOT also will help scientists determine the distance to the closer gamma-ray bursts, within a redshift of 4, which corresponds to a distance of about 11 billion light years. The XRT will determine distances to more distant bursts.
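The conversion from a redshift to a light-travel distance depends on the cosmological parameters assumed. A minimal sketch of that calculation, assuming a flat universe with H0 = 70 km/s/Mpc, Omega_m = 0.3 and Omega_Lambda = 0.7 (values not given in the article), comes out near 12 billion years for a redshift of 4, in the same ballpark as the 11-billion-light-year figure quoted above.

```python
import numpy as np
from scipy.integrate import quad

# Lookback (light-travel) time to redshift z in a flat universe:
#   t = integral from 0 to z of dz' / [ (1 + z') * H(z') ],
#   H(z) = H0 * sqrt( Omega_m * (1 + z)^3 + Omega_Lambda ).
# The cosmological parameters below are assumptions, not values from the article.

H0_KM_S_MPC, OMEGA_M, OMEGA_L = 70.0, 0.3, 0.7
KM_PER_MPC = 3.086e19
SEC_PER_GYR = 3.156e16

H0 = H0_KM_S_MPC / KM_PER_MPC                      # Hubble constant in 1/s

def lookback_time_gyr(z):
    integrand = lambda zp: 1.0 / ((1.0 + zp) * H0 * np.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L))
    t_seconds, _ = quad(integrand, 0.0, z)
    return t_seconds / SEC_PER_GYR

print(f"light-travel time to z = 4: ~{lookback_time_gyr(4.0):.1f} billion years")
```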

Scientists hope to use the UVOT and XRT to observe the afterglow of short bursts, less than two seconds long. Such afterglows have not yet been seen; it is not clear if they fade fast or simply don’t exist. Some scientists think there are at least two kinds of gamma-ray bursts: longer ones (more than two seconds) that generate afterglows and that seem to be caused by massive star explosions, and shorter ones that may be caused by mergers of black holes or neutron stars. The UVOT and XRT will help to rule out various theories and scenarios.

The UVOT is a 30-centimeter telescope with intensified CCD detectors and is similar to an instrument on the European Space Agency’s XMM-Newton mission. The UVOT is as sensitive as a four-meter optical ground-based telescope. The UVOT’s day-to-day observations, however, will look nothing like M101. Distant and faint gamma-ray burst afterglows will appear as tiny smudges of light even to the powerful UVOT. The UVOT is a joint product of Penn State and the Mullard Space Science Laboratory.

Swift is a medium-class explorer mission managed by NASA Goddard. Swift is a NASA mission with participation of the Italian Space Agency and the Particle Physics and Astronomy Research Council in the United Kingdom. It was built in collaboration with national laboratories, universities and international partners, including Penn State University in Pennsylvania, U.S.A.; Los Alamos National Laboratory in New Mexico, U.S.A.; Sonoma State University in California, U.S.A.; the University of Leicester in Leicester, England; the Mullard Space Science Laboratory in Dorking, England; the Brera Observatory of the University of Milan in Italy; and the ASI Science Data Center in Rome, Italy.

Original Source: Eberly College of Science News Release

Digging on Mars Won’t Be Easy

Imagine this scenario. The year is 2030 or thereabouts. After voyaging six months from Earth, you and several other astronauts are the first humans on Mars. You’re standing on an alien world, dusty red dirt beneath your feet, looking around at a bunch of mining equipment deposited by previous robotic landers.

Echoing in your ears are the final words from mission control: “Your mission, should you care to accept it, is to return to Earth–if possible using fuel and oxygen you mine from the sands of Mars. Good luck!”

It sounds simple enough, mining raw materials from a rocky, sandy planet. We do it here on Earth, why not on Mars, too? But it’s not as simple as it sounds. Nothing about granular physics ever is.

Granular physics is the science of grains, everything from kernels of corn to grains of sand to grounds of coffee. These are common everyday substances, but they can be vexingly difficult to predict. One moment they behave like solids, the next like liquids. Consider a dump truck full of gravel. When the truck begins to tilt, the gravel remains in a solid pile, until at a certain angle it suddenly becomes a thundering river of rock.
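The dump-truck picture can be captured with one line of granular physics: to first order, a dry pile gives way when the bed tilts past the angle whose tangent equals the intergrain friction coefficient. The friction value below is an assumed, typical figure for dry sand.

```python
import math

# Critical tilt angle of a dry granular pile (first-order model):
#   tan(theta_critical) = mu, the effective intergrain friction coefficient.
# mu = 0.6 is an assumed, typical value for dry sand.

def critical_angle_deg(friction_coefficient):
    return math.degrees(math.atan(friction_coefficient))

mu_dry_sand = 0.6
print(f"the pile lets go near {critical_angle_deg(mu_dry_sand):.0f} degrees of tilt")   # ~31 degrees
```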

Understanding granular physics is essential for designing industrial machinery to handle vast quantities of small solids–like fine Martian sand.

The problem is, even here on Earth “industrial plants don’t work very well because we don’t understand equations for granular materials as well as we understand the equations for liquids and gases,” says James T. Jenkins, professor of theoretical and applied mechanics at Cornell University in Ithaca, N.Y. “That’s why coal-fired power plants operate at low efficiencies and have higher failure rates compared to liquid-fuel or gas-fired power plants.”

So “do we understand granular processing well enough to do it on Mars?” he asks.

Let’s start with excavation: “If you dig a trench on Mars, how steep can the sides be and remain stable without caving in?” wonders Stein Sture, professor of civil, environmental, and architectural engineering and associate dean at the University of Colorado in Boulder. There’s no definite answer, not yet. The layering of dusty soils and rock on Mars isn’t well enough known.
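One way to frame the trench question is with the classical Rankine estimate for the tallest unsupported vertical cut a cohesive soil can hold: H_crit = 4c tan(45 deg + phi/2) / (rho g). The Martian soil properties plugged in below (cohesion, friction angle, bulk density) are illustrative assumptions; as Sture notes, the real values are not yet well known.

```python
import math

# Classical Rankine estimate of the deepest unsupported vertical trench wall:
#   H_crit = 4 * c * tan(45 deg + phi/2) / (rho * g)
# The soil parameters below are illustrative assumptions for Martian regolith;
# the actual values are exactly what future landers need to measure.

G_MARS = 3.71          # surface gravity on Mars, m/s^2
c = 1.0e3              # assumed cohesion, Pa
phi_deg = 35.0         # assumed internal friction angle, degrees
rho = 1500.0           # assumed bulk density, kg/m^3

unit_weight = rho * G_MARS
h_crit = 4.0 * c * math.tan(math.radians(45.0 + phi_deg / 2.0)) / unit_weight
print(f"critical unsupported trench depth ~ {h_crit:.1f} m for these inputs")   # ~1.4 m
```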

Some information about the mechanical composition of the top meter or so of Martian soils could be gained by ground-penetrating radar or other sounding devices, Sture points out, but much deeper and you “probably need to take core samples.” NASA’s Phoenix Mars lander (landing 2008) will be able to dig trenches about a half-meter deep; the 2009 Mars Science Laboratory will be able to cut out rock cores. Both missions will provide valuable new data.

To go even deeper, Sture (in connection with the University of Colorado’s Center for Space Construction) is developing innovative diggers whose business ends vibrate into soils. Agitation helps break cohesive bonds holding compacted soils together and can also help mitigate the risk of soils collapsing. Machines like these might one day go to Mars, too.

Another problem is “hoppers”–the funnels miners use to guide sand and gravel onto conveyor belts for processing. Knowledge of Martian soils would be vital in designing the most efficient and maintenance-free hoppers. “We don’t understand why hoppers jam,” Jenkins says. Jams are so frequent, in fact, that “on Earth, every hopper has a hammer close by.” Banging on the hopper frees the jam. On Mars, where there would be only a few people around to tend equipment, you’d want hoppers to work better than that. Jenkins and colleagues are researching why granular flows jam.

And then there’s transportation: The Mars rovers Spirit and Opportunity have had little trouble driving miles around their landing sites since 2004. But these rovers are only about the size of an average office desk and only about as massive as an adult. They’re go-carts compared to the massive vehicles possibly needed for transporting tons of Martian sand and rock. Bigger vehicles are going to have a tougher time getting around.

Sture explains: As early as the 1960s, when scientists were first studying possible solar-powered rovers for negotiating loose sands on the Moon and other planets, they calculated “that the maximum viable continuous rolling contact pressure over Martian soils is only 0.2 pounds per square inch (psi),” especially when traveling up or down slopes. This low figure has been confirmed by the behavior of Spirit and Opportunity.

A rolling contact pressure of only 0.2 psi “means that a vehicle has to be light-weight or has to have a way of effectively distributing the load to many wheels or tracks. Reducing contact pressure is crucial so the wheels don’t dig into soft soil or break through duricrusts [thin sheets of cemented soils, like the thin crust on windblown snow on Earth] and get stuck.”
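It is easy to see why that 0.2 psi ceiling pushes designs toward very large wheels or tracks: the required footprint is just the vehicle’s Martian weight divided by the allowed pressure. The one-tonne vehicle mass and six-wheel layout in the sketch are assumptions for illustration.

```python
# Required ground contact area under the 0.2 psi limit quoted above.
# The 1000 kg vehicle mass and six-wheel layout are illustrative assumptions.

PSI_TO_PA = 6894.76
G_MARS = 3.71                       # m/s^2

max_pressure_pa = 0.2 * PSI_TO_PA   # ~1380 Pa
mass_kg = 1000.0                    # assumed loaded vehicle mass
n_wheels = 6

weight_n = mass_kg * G_MARS
total_area_m2 = weight_n / max_pressure_pa
per_wheel_m2 = total_area_m2 / n_wheels

print(f"total footprint needed: {total_area_m2:.1f} m^2")
print(f"per wheel: {per_wheel_m2:.2f} m^2 (about {per_wheel_m2 ** 0.5:.2f} m on a side)")
# A few square meters of contact even for a modest one-tonne vehicle, which is
# why enormous wheels or wide tracks keep coming up in these designs.
```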

That requirement implies that a vehicle for moving heavier loads–people, habitats, equipment–might be “a huge Fellini-type thing with wheels 4 to 6 meters (12 to 18 feet) in diameter,” says Sture, referring to the famous Italian director of surreal films. Or it might have enormous open-mesh metal treads like a cross between highway-construction backhoes on Earth and the lunar rover used during the Apollo program on the Moon. Thus, tracked or belted vehicles seem promising for carrying large payloads.

A final challenge facing granular physicists is to figure out how to keep equipment operating through Mars’ seasonal dust storms. Martian storms whip fine dust through the air at velocities of 50 m/s (100+ mph), scouring every exposed surface, sifting into every crevice, burying exposed structures both natural and manmade, and reducing visibility to meters or less. Jenkins and other investigators are studying the physics of aeolian [wind] transport of sand and dust on Earth, both to understand the formation and movement of dunes on Mars, and also to ascertain which sites for eventual habitats might be best protected from prevailing winds (for example, in the lee of large rocks).

Returning to Jenkins’s big question – “do we understand granular processing well enough to do it on Mars?” – the unsettling answer is: we don’t yet know.

Working with imperfect knowledge is okay on Earth because, usually, no one suffers much from that ignorance. But on Mars, ignorance could mean reduced efficiency or, worse, could prevent the astronauts from mining enough oxygen and hydrogen to breathe or to use as fuel for the return to Earth.

Granular physicists analyzing data from the Mars rovers, building new digging machines, tinkering with equations, are doing their level best to find the answers. It’s all part of NASA’s strategy to learn how to get to Mars … and back again.

Original Source: Science@NASA