Carnival of Space #174

Carnival of Space. Image by Jason Major.


A little late posting this one, but here’s this week’s Carnival of Space, hosted by David Portree over at Beyond Apollo.

Click here to read the Carnival of Space #174.

And if you’re interested in looking back, here’s an archive of all the past Carnivals of Space. If you’ve got a space-related blog, you should really join the carnival. Just email an entry to [email protected], and the next host will link to it. It will help raise awareness of your writing and help you meet others in the space community – and community is what blogging is all about. And if you really want to help out, let Fraser know if you can be a host, and he’ll schedule you into the calendar.

Virtual Observatory Discovers New Cataclysmic Variable

Simulation of Intermediate Polar CV star
Simulation of Intermediate Polar CV star (Dr Andy Beardmore, Keele University)


In my article two weeks ago, I discussed how data mining large surveys through online observatories would lead to new discoveries. Sure enough, a pair of astronomers, Ivan Zolotukhin and Igor Chilingarian, using data from the Virtual Observatory, have announced the discovery of a cataclysmic variable (CV).


Cataclysmic variables are often called “novae,” but they are not single stars. They are actually binary systems in which interactions cause large increases in brightness as matter from a secondary (usually post-main-sequence) star is accreted onto a white dwarf. The accreted matter piles up on the surface until it reaches a critical density and undergoes a brief but intense phase of fusion, increasing the brightness of the system considerably. Unlike in a Type Ia supernova, the white dwarf never accumulates the critical mass required to destroy the star, so the outburst can repeat.

The team began by considering a list of 107 objects from the Galactic Plane Survey conducted by the Advanced Satellite for Cosmology and Astrophysics (ASCA, a Japanese satellite operating in the x-ray regime). These objects were exceptional x-ray emitters that had not yet been classified. While other astronomers have done targeted investigations of individual objects requiring new telescope time, this team attempted to determine whether any of the odd objects were CVs using readily available data from the Virtual Observatory.

Since the objects were all strong x-ray sources, they all met at least one criterion for being a CV. Another is that CV stars are often strong Hα emitters, since the eruptions often eject hot hydrogen gas. To determine whether any of the objects were emitters in this regime, the astronomers cross-referenced the list of objects with data from the Isaac Newton Telescope Photometric Hα Survey of the northern Galactic plane (IPHAS) using a color-color diagram. In the portion of the IPHAS field of view that overlapped the ASCA image for one of the objects, the team found an object that emitted strongly in Hα. But in such a dense field, and with such different wavelength regimes, it was difficult to be certain the two detections were the same object.

To determine whether the two interesting objects were indeed the same, or whether they just happened to lie near each other, the pair turned to data from Chandra. Since Chandra has a much smaller positional uncertainty (0.6 arcseconds), the pair was able to confirm that the interesting object from IPHAS was indeed the same one from the ASCA survey.
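The cross-identification step amounts to asking whether two catalog positions agree to within the instruments' positional uncertainty. Here is a minimal Python sketch of the idea; the coordinates below are hypothetical, not the published position of the source:

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Approximate angular separation in arcseconds between two sky
    positions given in decimal degrees (flat-sky, small-angle form)."""
    dra = (ra1 - ra2) * math.cos(math.radians((dec1 + dec2) / 2.0))
    ddec = dec1 - dec2
    return math.hypot(dra, ddec) * 3600.0

# Hypothetical coordinates (NOT the real position of the source):
chandra_pos = (285.00010, 5.00005)   # refined X-ray position
iphas_pos = (285.00020, 5.00010)     # candidate H-alpha emitter

sep = angular_sep_arcsec(*chandra_pos, *iphas_pos)

# Accept the identification if the separation falls within Chandra's
# ~0.6 arcsecond positional uncertainty.
is_match = sep <= 0.6
```

Real pipelines, including the Virtual Observatory tools the team used, cross-match whole catalogs with proper spherical geometry, but for sub-arcminute separations this flat-sky approximation is accurate.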

Thus, the object passed the two tests the team had devised for finding cataclysmic variables, and followup observation was warranted. The astronomers used the 3.5-m Calar Alto telescope to conduct spectroscopic observations and confirmed that the star was indeed a CV. In particular, it appears to belong to a subclass in which the white dwarf primary has a magnetic field strong enough to disrupt the accretion disk, so that accreting material falls onto the poles of the star (this is known as an intermediate polar CV).

This discovery is an example of how new findings are waiting to happen in data that is already available and sitting in archives, waiting to be explored. Much of this data is available to the public and can be mined by anyone with the proper computer programs and know-how. Undoubtedly, as these storehouses of data are organized in more user-friendly ways, additional discoveries will be made in this manner.

Understanding the Unusual LCROSS Ejecta Plume

Solid impacts send debris to the side (left), whereas hollow impacts result in a high-angle ejecta plume (right). The LCROSS impact was an emptied rocket and acted like a hollow projectile. Figure shows parts of a high-speed image sequence from experiments made at the Ames Vertical Gun Range at NASA's Ames Research Center, Moffett Field, Calif. Image credit: Brown University/Peter H. Schultz and Brendan Hermalyn, NASA/Ames Vertical Gun Range.

LCROSS was an unusual mission, in that it relied on an impact in order to study a planetary body. Not only was the mission unusual, so was the ejecta plume produced by slamming a hollow Centaur rocket booster into the Moon.

“A normal impact with a solid impactor throws debris out more than up, like an inverted lampshade that gets wider and wider as it goes out,” said Pete Schultz, from Brown University and a member of the LCROSS science team. “But the configuration of a hollow impactor — the empty rocket booster — created a plume that had both a low angle plume but more importantly, also a really prominent high angle plume that shot almost straight up.”

This high plume elevated the debris enough so it was illuminated by sunlight, and could be studied by spacecraft.

Even though the plume wasn’t seen from Earth, as had been advertised prior to the impact, it was seen by both the LCROSS shepherding spacecraft and the Lunar Reconnaissance Orbiter. Using the spent Centaur was not so much mission design as using what was available. But it turned out to be a great choice.

“I think we were quite fortunate,” Schultz told Universe Today in a phone interview this week. “I think another design, and we may have gotten a very different result. Not much debris may have come up into the sunlight and the plume would have been very temporary.”

In order for the debris to get high enough to come into sunlight, it had to rise up about a half mile above the bottom of the crater.

“To put this into perspective,” said Schultz, “we had to throw debris up twice the height of the Sears Tower, the tallest building in the US. Now the Moon has less gravity, so if we bring it back down to Earth and compare it, it is like trying to throw a ball to the top of the Washington Monument. So there is a lot of gravity to overcome, and it turns out that this impact did it because we used a hollow impactor.”

When the rocket booster hit and the crater began to form, the lunar surface collapsed and shot upwards – almost like a jet – towards the sunlight, carrying with it the volatiles that had been trapped in the regolith.

In order to figure out what the impact was going to look like, Schultz and his team, which included graduate student Brendan Hermalyn, did small-scale impacts and modeling. Their tests were done only a couple of months before the actual impact, firing small half-inch projectiles into different surfaces.

“Most impacts, when we model them, we assume the impactors are solid,” Schultz said. “We did experiments, with both solid and hollow projectiles, and when we used the hollow projectile, we had a real surprise. We not only saw the debris moving outward, but also upward.”

“We really didn’t know exactly what we were going to see in the actual LCROSS impact, but our tests explained a lot,” Schultz continued, “explaining why we saw what we did and why we saw the plume for such a long time. If it had been coming out like an inverted lampshade or a funnel expanding, the debris would have come up and gone back down, and probably would have been done within about 20 seconds. Instead, it just kept on coming.”

But there were some expected moments. As the LCROSS shepherding spacecraft approached the lunar surface, Tony Colaprete and the team readjusted the exposures on the cameras and the team was able to actually see the surface of the Moon in the final seconds before impact.

“That was great,” Schultz said. “That means we got to see the crater, we were able to get an estimate on how big the crater was, and it made sense with what our predictions had said. But we were also able to see the remnants of this high angle plume still returning to the surface. This must have been shot almost straight up into space, and was now coming back to the Moon. We saw it as a very diffuse cloud, and saw the remaining portions of the regolith coming back down, like a fountain. To me, that was the most exciting part.”

Schultz said he was nervous during the impact.

“I have to confess, we were on pins and needles,” he said, “as this was something much bigger than the experiments using half-inch projectiles and we didn’t know if it was going to scale up. We were dealing with something that looked like a school bus with no children aboard that was slamming into the Moon, and we didn’t know if that was going to behave in the same way as our smaller models.”

And even though the plume did act like the models, there were plenty of surprises — both in the impact and what has now been discovered to exist in Cabeus Crater.

“We knew when it was going to hit the surface – we knew how fast we were going and where we were above the surface — and it turned out there was a delay before we saw the flash, and that was really a surprise,” Schultz said. “It was about a half-second delay, and then it took about a third of a second before it began to rise and get brighter. The whole thing took seven-tenths of a second before it began to get bright. That is the hallmark of a fluffy surface.”

Schultz said they know that it was likely a “fluffy” surface from the experiments and modeling, and from comparisons with the Deep Impact mission, for which he was a co-investigator.

“One of the first things we realized was that this is not your normal regolith — what you usually think of for the Moon,” Schultz said. “We watched the flash, and we looked at what type of spectra we saw. The spectrum has the fingerprints of the composition of the elements and compounds. We were expecting that because of the low speed we actually wouldn’t get to see much. But instead we immediately got a couple of hits: we got to see a sudden emission of OH, which at this wavelength is characteristic of a byproduct of the heating of water. Then the next 2-second exposure was when things started emerging; the overall spectrum got brighter, which meant we were seeing more dust. But then we saw this big giant peak of sodium, just like a beacon, a very bright sodium line.”

And then there were two other lines that were very odd. “The best association we could find was that it was silver,” said Schultz. “That was a surprise. Then all these other emission lines started emerging as more material got into sunlight. This suggests that we were throwing the dust into the sunlight, and the volatiles that had been frozen in time, literally, in the shadows of Cabeus were heating up and being released.”

Some of these compounds included not only water and OH, but also things like carbon monoxide, carbon dioxide, and methane, “things that we don’t think of when we talk about the Moon,” said Schultz. “Those are compounds we think of when we think about comets, so now we are in a position that maybe what we are seeing at the poles are the result of a long history of impacts that bring with them a lot of this type of material.” (Read our interview with Tony Colaprete for more about the recent LCROSS results.)

But no one is sure how the Moon can hold onto these volatiles and how they end up in the polar craters.

To figure that out, Schultz said more missions to the Moon are needed.

“Even though the Apollo astronauts were there, we’re now finding things 40 years later that are making our heads snap from all this the new information,” Schultz said. “It goes to show you, you can visit and think you know a place, but you have to go back and maybe even live there.”

Schultz said that as an experimentalist, one can never feel smug, but seeing how the actual plume behaved just like their models, he and his team were very happy. “Experiments are letting nature teach you lessons and that is why they are very interesting to do. We are humbled almost daily.”

Water on the Moon and Much, Much More: Latest LCROSS Results

An image of debris, ejected from Cabeus crater and into the sunlight, about 20 seconds after the LCROSS impact. The inset shows a close-up with the direction of the sun and the Earth. Image courtesy of Science/AAAS


A year ago, NASA successfully slammed a spent Centaur rocket into Cabeus Crater, a permanently shadowed region at the lunar South Pole. The “shepherding” LCROSS (Lunar Crater Observation and Sensing Satellite) spacecraft followed close on the impactor’s heels, monitoring the resulting ejecta cloud to see what materials could be found inside this dark, unstudied region of the Moon. Today, the LCROSS team released the most recent findings from their year-long analysis, and principal investigator Tony Colaprete told Universe Today that LCROSS found water and much, much more. “The ‘much more’ is actually as interesting as the water,” he said, “but the combination of water and the various volatiles we saw is even more interesting — and puzzling.”

The 2400 kg (5200 pound) Centaur rocket created a crater about 25 to 30 meters wide, and the LCROSS team estimates that somewhere between 4,000 kilograms (8,818 pounds) and 6,000 kilograms (13,228 pounds) of debris was blown out of the dark crater and into the sunlit LCROSS field of view. The impact created both a low angle and a high angle ejecta cloud. (Read more about the unusual plume in our interview with LCROSS’s Pete Schultz).

The LCROSS team was able to measure a substantial amount of water and found it in several forms. “We measured it in water vapor,” Colaprete said, “and much more importantly in my mind, we measured it in water ice. Ice is really important because it talks about certain levels of concentration.”

With a combination of near-infrared, ultraviolet and visible spectrometers onboard the shepherding spacecraft, LCROSS found that about 155 kilograms (342 pounds) of water vapor and water ice was blown out of the crater. From that, Colaprete and his team estimate that approximately 5.6 percent of the total mass inside Cabeus crater (plus or minus 2.9 percent) can be attributed to water ice alone.
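As a rough consistency check on those figures (this is back-of-envelope arithmetic only, not the team's full modeling, which also accounts for material that never reached the sunlit field of view):

```python
# Numbers quoted in the article.
water_mass_kg = 155.0                          # water vapor + ice seen in the plume
ejecta_lo_kg, ejecta_hi_kg = 4000.0, 6000.0    # debris lofted into sunlight

# Naive bounds on the water fraction of the sunlit ejecta.
frac_hi = water_mass_kg / ejecta_lo_kg   # about 3.9%
frac_lo = water_mass_kg / ejecta_hi_kg   # about 2.6%
```

Both naive bounds fall inside the team's modeled range of 5.6 plus or minus 2.9 percent, so the headline numbers hang together.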

Colaprete said finding ice in concentrations – “blocks” of ice — is extremely important. “It means there has to be some kind of process by which it is being enhanced, enriched and concentrated so that you have what is called a critical cluster that allows germ formation and crystalline growth and condensation of ice. So that data point is important because now we have to ask that question, how did it become ice?” he said.

In with the water vapor, the LCROSS team also saw two ‘flavors’ of hydroxyl. “We saw one that was emitting as if it were just being excited,” Colaprete said, “which means this OH could have come from grains — it could be the adsorbed OH we saw in the M Cubed data, as it was released or liberated from a hot impact and coming up into view. We also see an emission from OH that is called prompt emission, which is unique to the emission you get when OH is formed through photolysis.”

Then came the ‘much more.’ Between the LCROSS instruments and the Lunar Reconnaissance Orbiter’s observations – in particular the LAMP instrument (Lyman Alpha Mapping Project) – the most abundant volatile in terms of total mass was carbon monoxide, followed by water, then hydrogen sulfide, then carbon dioxide, sulfur dioxide, methane, formaldehyde, perhaps ethylene, ammonia, and even mercury and silver.

“So there’s a variety of different species, and what is interesting is that a number of those species are common to water,” Colaprete said. “So for example the ammonia and methane are at concentrations relative to the total water mass we saw, similar to what you would see in a comet.”

The LCROSS NIR spectrometer field of view (green circle), projected against the target area in the crater Cabeus. Credit: Colaprete, et al.

Colaprete said the fact that they see carbon monoxide as more abundant than water and that hydrogen sulfide exists as a significant fraction of the total water, suggests a considerable amount of processing within the crater itself.

“There is likely chemistry occurring on the grains in the dark crater,” he explained. “That is interesting because how do you get chemistry going on at 40 to 50 degrees Kelvin with no sunlight? What is the energy — is it cosmic rays, solar wind protons working their way in, is it other electrical potentials associated with the dark and light regions? We don’t know. So this is, again, a circumstance where we have some data that doesn’t make entirely a lot of sense, but it does match certain findings elsewhere, meaning it does look cometary to some extent, and does look like what we see in cold grain processes in interstellar space.”

Colaprete said that finding many of these compounds came as a surprise, such as the carbon monoxide, mercury, and particularly methane and molecular hydrogen. “We have a lot of questions because of the appearance of these species,” he said.

There were also differences in the abundances of all the species over time – the short four minutes during which they were able to monitor the ejecta cloud before the shepherding spacecraft itself impacted the Moon. “We actually can de-convolve, if you will, the release of the volatiles as a function of time as we look more and more closely at the data,” he said. “And this is important because we can relate what was released at the initial impact, what was released as grains sublimed in sunlight, and what was ‘sweated out’ of the hot crater. So that’s where we’re at right now, it’s not just, ‘hey we saw water, and we saw a significant amount.’ But as a function of time there are different parts coming out, and different ‘flavors’ of water, so we are unraveling it to a finer and finer detail. That is important, since we need to understand more accurately what we actually impacted into. That is really what we are interested in: what are the conditions we impacted into, and how is the water distributed in the soil in that dark crater.”

So the big question is, how did all these different compounds get there? Cometary impacts seem to offer the best answer, but it could also be outgassing from the early Moon, solar wind delivery, another unknown process, or a combination.

“We don’t understand it at all, really,” Colaprete said. “The analysis and the modeling is really in its infancy. It is just beginning, and now we finally have some data from all these various missions to constrain the models and really allow us to move beyond speculation.”

LCROSS was an “add-on” mission to the LRO launch, and the mission had several unknowns. Colaprete said his biggest fear going into the impact and going into the results was that they wouldn’t get any data. “I had fears that something would happen, there would be no ejecta, no vapor and we’d just disappear into this black hole,” he confessed. “And that would have been unfortunate, even though it would have been a data point and we would have had to figure out how the heck that would happen.”

But they did get data, and in an abundance that — like any successful mission — offers more questions than answers. “It really was exploration,” Colaprete said. “We were going somewhere we had absolutely never gone before, a permanently shadowed crater in the poles of the Moon, so we knew going into this that whatever we got back data-wise would probably leave us scratching our heads.”

Additional source: Science

The Strange Warm Spot of upsilon Andromedae b

The warmest part of upsilon Andromedae b is not directly under the light coming from its host star, as would be expected. Image Credit: NASA/JPL-Caltech


If you set a big black rock outside in the Sun for a few hours, then go and touch it, you’d expect the warmest part of the rock to be that which was facing the Sun, right? Well, when it comes to exoplanets, your expectations will be defied. A new analysis of a well-studied exoplanetary system reveals that one of the planets – which is not a big black rock, but a Jupiter-like ball of gas – has its warmest part opposite that of its star.

The system of Upsilon Andromedae, which lies 44 light years away from the Earth in the constellation Andromeda, is a much studied system of planets that orbit around a star a little more massive and slightly hotter than our Sun.

The closest planet to the star, upsilon Andromedae b, was the first exoplanet to have its temperature taken, by the Spitzer Space Telescope. As we reported back in 2006, upsilon Andromedae b was thought to be tidally locked to the star and to show corresponding temperature changes as it went around its host star. That is, as it went behind the star from our perspective, the visible face was warmer than when it was in front of the star from our perspective. Simple enough, right? These original results were published in a paper in Science on October 27th, 2006, available here.

As it turns out, this temperature change scenario is not the case. UCLA Professor of Physics and Astronomy Brad Hansen, who is a co-author on both the 2006 paper and updated results, explains, “The original report was based on just a few hours of data, taken early in the mission, to see whether such a measurement was even possible (it is close to the limit of the expected performance of the instrument). Since the observations suggested it was possible to detect, we were awarded a larger amount of time to do it in more detail.”

Observations of upsilon Andromedae b were taken with Spitzer again in February of 2009. Once the astronomers were able to study the planet in more detail, they discovered something odd – the planet was a lot warmer when it passed in front of the star from our perspective than when it passed behind it, just the opposite of what one would expect, and opposite of the results they originally published. Here’s a link to an animation that helps explain this strange feature of the planet.

What the astronomers discovered – and have yet to explain fully – is that there is a “warm spot” offset by about 80 degrees from the face of the planet that points toward the star. In other words, the warmest spot on the planet is not on the side of the planet that is receiving the most radiation from the star.

This in itself is not a novelty. Hansen said, “There are several exoplanets observed with warm spots, including some whose spots are shifted relative to the location facing the star (an example is the very well studied system HD189733b). The principal difference in this case is that the shift we observe is the largest known.”

Upsilon Andromedae b does not transit in front of its star from our vantage point on Earth. Its orbit is inclined by about 30 degrees, so it appears to pass “below” the star as it comes around the front. This means that astronomers cannot use the transit method of exoplanetary study to get a handle on its orbit, and must instead measure the tug that the planet exerts on the star. It has been determined that upsilon Andromedae b orbits about every 4.6 days, has a mass 0.69 times that of Jupiter, and is about 1.3 Jupiter radii in diameter. To get a better idea of the whole system of upsilon Andromedae, see this story we ran earlier this year.
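The size of that 4.6-day orbit follows from Kepler's third law. A quick Python sketch; the stellar mass here is an assumption (roughly 1.3 solar masses, since the article says only that the star is "a little more massive" than the Sun):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

P = 4.6 * 86400.0        # the 4.6-day orbital period, in seconds
M_star = 1.3 * M_SUN     # assumed mass for upsilon Andromedae A

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2)
a_m = (G * M_star * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
a_au = a_m / AU          # roughly 0.06 AU
```

At about six hundredths of an AU, the planet sits more than ten times closer to its star than Mercury does to the Sun, which is why so much of its atmosphere's behavior is driven by stellar heating.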

So what, exactly, could be causing this bizarrely placed warm spot on the planet? The paper authors suggest that equatorial winds – much like those on Jupiter – could be transferring heat around the planet.

A graph and visual representation of the hot spot as the planet orbits the star upsilon Andromedae. Image credit: NASA/JPL-Caltech/UCLA

Hansen explained, “At the sub-stellar point (the one closest to the star) the amount of radiation being absorbed from the star is highest, so the gas there is heated more. It will therefore have a tendency to flow away from the hot region towards cold regions. This, combined with rotation will give a “trade wind”-like structure to the gas flow on the planet… The big uncertainty is how that energy is eventually dissipated. The fact that we observe a hot spot at roughly 90 degrees suggests that this occurs somewhere near the “terminator” (the day/night edge). Somehow the winds are flowing around from the sub-stellar point and then dissipating as they approach the night side. We speculate that this may be from the formation of some kind of shock front.”

Hansen said that they are unsure just how large this warm spot is. “We have only a very crude measure of this, so we have modeled it as basically two hemispheres – one hotter than the other. One could make the spot smaller and make it correspondingly hotter and you would get the same effect. So, one can trade off spot size versus temperature contrast while still matching the observations.”

The most recent paper, which is co-authored by members from the United States and the UK, will appear in the Astrophysical Journal. If you’d like to go outside and see the star upsilon Andromedae, here’s a star chart.

Source: JPL Press Release, arXiv here and here, and an email interview with Professor Brad Hansen.

VLT, Hubble Smash Record for Eyeing Most Distant Galaxy

Planck Time
The Universe. So far, no duplicates found.


Using the Hubble Space Telescope and the Very Large Telescope (VLT), astronomers have looked back to find the most distant galaxy so far. “We are observing a galaxy that existed essentially when the Universe was only about 600 million years old, and we are looking at this galaxy – and the Universe – 13.1 billion years ago,” said Dr. Matt Lehnert from the Observatoire de Paris, who is the lead author of a new paper in Nature. “Conditions were quite different back then. The basic picture in which this discovery is embedded is that this is the epoch in which the Universe went from largely neutral to basically ionized.”

Lehnert and an international team used the VLT to make follow-up observations of the galaxy — called UDFy-38135539 – which Hubble observations in 2009 had revealed. The astronomers analyzed the very faint glow of the galaxy to measure its distance — and age. These are the first confirmed observations of a galaxy whose light is emerging from the reionization of the Universe.

The reionization period is about the farthest back in time that astronomers can observe. The Big Bang, 13.7 billion years ago, created a hot, murky universe. Some 400,000 years later, temperatures cooled, electrons and protons joined to form neutral hydrogen, and the murk cleared. Some time within the first billion years after the Big Bang, the neutral hydrogen began to form stars in the first galaxies, which radiated energy and ionized the hydrogen once again. The result was nothing like the thick plasma soup of the period just after the Big Bang, but this galaxy formation started the reionization epoch, clearing the opaque hydrogen fog that filled the cosmos at this early time.

A simulation of galaxies during the era of reionization in the early Universe. Credit: M. Alvarez, R. Kaehler, and T. Abel

“The whole history of the Universe is from the reionization,” Lehnert said during an online press briefing. “The dark matter that pervades the Universe began to drag the gas along and formed the first galaxies. When the galaxies began to form, it reionized the Universe.”

UDFy-38135539 is about 100 million light-years farther than the previous most distant object, a gamma-ray burst.

Studying these first galaxies is extremely difficult, Lehnert said. Their dim light falls mostly in the infrared part of the spectrum because its wavelength has been stretched by the expansion of the Universe — an effect known as redshift. To make matters worse, for much of the first billion years after the Big Bang, the hydrogen fog that pervaded the Universe absorbed the fierce ultraviolet light from young galaxies.

The new Wide Field Camera 3 on the NASA/ESA Hubble Space Telescope discovered several candidate objects in 2009, and with 16 hours of observations using the VLT, the team was able to detect the very faint glow from hydrogen at a redshift of 8.6.
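A quick calculation shows why such a distant galaxy must be observed in the infrared. The hydrogen Lyman-alpha line, emitted in the ultraviolet, arrives stretched by a factor of (1 + z):

```python
# Rest-frame wavelength of the hydrogen Lyman-alpha line, in nanometers.
LYMAN_ALPHA_NM = 121.567

z = 8.6  # the measured redshift of UDFy-38135539
observed_nm = LYMAN_ALPHA_NM * (1.0 + z)   # about 1167 nm
```

Roughly 1167 nanometers lands squarely in the near-infrared, beyond what optical spectrographs can reach — hence the need for an infrared instrument on the VLT.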

The team used the SINFONI infrared spectroscopic instrument on the VLT and a very long exposure time.

“Measuring the redshift of the most distant galaxy so far is very exciting in itself,” said co-author Nicole Nesvadba (Institut d’Astrophysique Spatiale), “but the astrophysical implications of this detection are even more important. This is the first time we know for sure that we are looking at one of the galaxies that cleared out the fog which had filled the very early Universe.”

One of the surprising things about this discovery is that the glow from UDFy-38135539 seems not to be strong enough on its own to clear out the hydrogen fog. “There must be other galaxies, probably fainter and less massive nearby companions of UDFy-38135539,” said co-author Mark Swinbank from Durham University, “which also helped make the space around the galaxy transparent. Without this additional help the light from the galaxy, no matter how brilliant, would have been trapped in the surrounding hydrogen fog and we would not have been able to detect it.”

Sources: ESO, press briefing

The Tug of Exoplanets on Exoplanets

Earlier this year, I wrote about how an apparent change in the orbital characteristics of the planet TrES-2b may be indicative of a new planet, much in the same way perturbations of Uranus revealed the presence of Neptune. A follow-up study has now been conducted by astronomers at the University of Arizona, and another study, on the planet WASP-3b, also enters the fray.

The new study by the University of Arizona team observed TrES-2b on June 15, 2009, just seven orbits after the observations by Mislis et al. that reported the change in orbit. The findings of Mislis et al. were that not only was the onset of the transit offset, but the angle of inclination was slowly changing. Yet the Arizona team found that their results matched the previous data sets, with no indication of either effect (within error) when compared to the timing predictions from earlier studies.

Additionally, an unrelated study led by Ronald Gilliland of the Space Telescope Science Institute, discussing various sampling modes of the Kepler telescope, used the TrES-2b system as an example and coincidentally preceded and overlapped one of the observations made by Mislis et al. This study, too, found no variation in the orbital characteristics of the planet.

Another test they applied to determine whether the orbit was changing was the depth of the eclipse. Mislis’ team predicted that the trend would slowly change the plane of the orbit such that, eventually, the planet would no longer eclipse the star. But before that happened, there should be a period of time in which the planet covered less and less of the star. If that were the case, the amount of light blocked would decrease as well, until it vanished altogether. The Arizona team compared the depth of the eclipses they observed with the earlier observations and found no change here either.

So what went wrong with the data from Mislis et al.? One possibility is that they did not properly account for differences between their filter and the one used for the original observations from which the transit timing was determined. Stars have a feature known as limb darkening, in which the edges of the disk appear darker due to the angle at which light is being released. Some light is scattered in the atmosphere of the star, and since the scattering is wavelength dependent, so too are the effects of limb darkening. A photometric filter observing in a slightly different part of the spectrum would read those effects differently.
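To see concretely how filter choice matters, here is a sketch using the quadratic limb-darkening law commonly used in transit modeling. The coefficients are hypothetical, chosen only to illustrate that limb darkening is typically stronger at bluer wavelengths:

```python
def limb_darkened_intensity(mu, a, b):
    """Quadratic limb-darkening law, normalized to the disk center.
    mu is the cosine of the angle from the center of the stellar disk."""
    return 1.0 - a * (1.0 - mu) - b * (1.0 - mu) ** 2

mu = 0.3  # a point fairly close to the stellar limb

# Hypothetical coefficients for two bandpasses.
blue_intensity = limb_darkened_intensity(mu, a=0.6, b=0.2)  # dimmer limb
red_intensity = limb_darkened_intensity(mu, a=0.3, b=0.1)   # flatter profile

# The same transit chord therefore blocks a different fraction of the
# star's light in the two filters, subtly shifting the measured transit
# shape and timing.
```

With these illustrative numbers, the limb point is only about half as bright as disk center in the blue band but nearly three-quarters as bright in the red, so a planet crossing near the limb produces a noticeably different light-curve shape in each filter.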

While these findings have discredited the notion that there are perturbations in the TrES-2b system, the idea that we can find new exoplanets through their effects on known ones remains attractive, and other astronomers are pursuing it. One team, led by G. Maciejewski, has launched an international observing campaign to discover new planets by just this method. The campaign uses a series of telescopes, ranging from 0.6 to 2.2 meters and located around the world, to frequently monitor stars with known transiting planets. And this study may have just had its first success.

In a paper recently uploaded to arXiv, the team announced that variations in the timing of transits for the planet WASP-3b indicate the presence of a 15 Earth-mass planet in a 2:1 orbital resonance with the known one. Currently, the team is working on follow-up observations of its own, including radial velocity measurements with the Hobby-Eberly Telescope at the University of Texas at Austin. With any luck, this new method will begin to discover new planets.
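The transit-timing method boils down to "observed minus calculated" (O−C) bookkeeping: fit a strictly periodic ephemeris, then look for systematic deviations in when transits actually arrive. A minimal sketch, with invented times and period:

```python
# Observed-minus-calculated (O-C) transit timing: if the mid-transit times
# drift away from a strictly linear ephemeris, an unseen perturbing planet
# is one possible explanation. All numbers below are invented.
def o_minus_c(t0, period, observed_times):
    """Return (epoch, O-C in days) pairs for observed mid-transit times."""
    results = []
    for t in observed_times:
        epoch = round((t - t0) / period)       # nearest integer transit count
        predicted = t0 + epoch * period        # linear-ephemeris prediction
        results.append((epoch, t - predicted))
    return results

# A planet with a 1.85-day period; the third transit arrives ~2 minutes late.
times = [2455000.000, 2455001.850, 2455003.7014]
residuals = o_minus_c(t0=2455000.000, period=1.850, observed_times=times)
for epoch, resid in residuals:
    print(f"epoch {epoch}: O-C = {resid * 24 * 60:+.2f} min")
```

A single late transit proves nothing; it is a coherent, periodic pattern in the O−C residuals over many epochs that points to a perturber like the one claimed for WASP-3b.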

UPDATE: It looks like Maciejewski’s team has announced another potential planet through timing variations. This time around WASP-10.

NASA’s Ames Director Announces “100 Year Starship”

NASA's Ames Center Director, Simon "Pete" Worden, has announced that development of next-generation propulsion technologies is underway. Image Credit: NASA

[/caption]

The Director of NASA’s Ames Center, Pete Worden has announced an initiative to move space flight to the next level. This plan, dubbed the “Hundred Year Starship,” has received $100,000 from NASA and $ 1 million from the Defense Advanced Research Projects Agency (DARPA). He made his announcement on Oct. 16. Worden is also hoping to include wealthy investors in the project. NASA has yet to provide any official details on the project.

Worden has also expressed his belief that the space agency is now directed toward settling other planets. However, given that the agency has been redirected toward supporting commercial space firms, how this will be achieved has yet to be detailed. The details that have been provided are vague and in some cases contradictory.

The Ames Director went on to describe how these efforts will seek to emulate the fictional starships seen on the television show Star Trek. He stated that the public could expect to see the first prototype of a new propulsion system within the next few years. Given that NASA's FY 2011 budget has had to be revised and has yet to go through appropriations, this estimate may be overly optimistic.

One of the ideas being proposed is a microwave thermal propulsion system. This form of propulsion would eliminate the massive amount of fuel required to send craft into orbit: power would be "beamed" to the spacecraft, with either a laser or a microwave emitter heating the propellant and sending the vehicle aloft. The concept has been around for some time but has yet to be applied in a real-world vehicle.
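The attraction of beamed propulsion follows from the rocket equation: because the energy source stays on the ground, a light propellant can in principle be heated to higher exhaust velocities (specific impulse) than chemical combustion allows, so less of the launch mass has to be propellant. A rough sketch, with illustrative specific-impulse figures that are not mission specifications:

```python
import math

# Tsiolkovsky rocket equation: delta-v = Isp * g0 * ln(m0 / m_dry).
# Rearranged, the propellant mass fraction for a given delta-v is
# 1 - exp(-delta_v / (Isp * g0)). Isp values below are illustrative.
G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass_fraction(delta_v, isp):
    """Fraction of liftoff mass that must be propellant for a given delta-v."""
    return 1.0 - math.exp(-delta_v / (isp * G0))

dv_orbit = 9_400.0  # rough delta-v budget to low Earth orbit, m/s
chemical = propellant_mass_fraction(dv_orbit, isp=450)   # ~ a good LH2/LOX engine
beamed = propellant_mass_fraction(dv_orbit, isp=800)     # assumed beamed-thermal value

print(f"Chemical: {chemical:.0%} of liftoff mass is propellant")
print(f"Beamed:   {beamed:.0%}")
```

Even a modest gain in specific impulse cuts the propellant fraction dramatically, which is the whole appeal of leaving the power plant on the ground.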

The project is run by Dr. Kevin L.G. Parkin who described it in his PhD thesis and invented the equipment used. Along with him are David Murakami and Creon Levit. One of the previous workers on the program went on to found his own company in the hopes of commercializing the technology used.

For Worden, the first destinations to visit using this revolutionary technology would not be the Moon or even Mars. Rather, he suggests that we should visit the red planet's moons, Phobos and Deimos. Worden believes that astronauts can be sent to Mars by 2030 for around $10 billion – but only one way. The strategy appears to resemble the 'Faster-Better-Cheaper' approach promoted by then-NASA Administrator Dan Goldin during the 1990s.

DARPA is a branch of the U.S. Department of Defense whose purview is the development of new technology for the U.S. military. Some of the agency's previous efforts include the first hypertext system, as well as other computer-related developments that are used every day. DARPA has worked on space-related projects before, including light-weight satellites (LIGHTSAT), the X-37 space plane, the FALCON Hypersonic Cruise Vehicle (HCV) and a number of other programs.

The Defense Advanced Research Projects Agency or DARPA has been involved with a number of advanced technology projects. Image Credit: DARPA

Source: Kurzweil

First Law of Thermodynamics

First Law of Thermodynamics

[/caption]

Ever wonder how heat really works? Well, not too long ago, scientists, looking to make their steam engines more efficient, sought to do just that. Their efforts to understand the interrelationship between energy conversion, heat and mechanical work (and subsequently the larger variables of temperature, volume and pressure) came to be known as thermodynamics, taken from the Greek words "thermo" (meaning "heat") and "dynamis" (meaning "force"). Like most fields of scientific study, thermodynamics is governed by a series of laws that were realized thanks to ongoing observations and experiments. The first law of thermodynamics, arguably the most important, is an expression of the principle of conservation of energy.

Consistent with this principle, the first law expresses that energy can be transformed (i.e. changed from one form to another), but cannot be created or destroyed. It is usually formulated by stating that the change in the internal energy (ie. the total energy) contained within a system is equal to the amount of heat supplied to that system, minus the amount of work performed by the system on its surroundings. Work and heat are due to processes which add or subtract energy, while internal energy is a particular form of energy associated with the system – a property of the system, whereas work done and heat supplied are not. A significant result of this distinction is that a given internal energy change can be achieved by many combinations of heat and work.

This law was first expressed by Rudolf Clausius in 1850: "There is a state function E, called 'energy', whose differential equals the work exchanged with the surroundings during an adiabatic process." Earlier, less formal statements of the principle had been made by Germain Hess (via Hess's Law) and by Julius Robert von Mayer. The law can be expressed through the simple equation E2 – E1 = Q – W, where E2 – E1 represents the change in internal energy, Q the heat supplied to the system, and W the work done by the system. Another common expression, found in science textbooks, is ΔU = Q + W, where Δ represents change, U is the internal energy, and W is instead taken as the work done on the system.
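The bookkeeping in the equation above can be made concrete with a short worked example, using the sign convention from the text (Q is heat added to the system, W is work done by the system), with arbitrary numbers:

```python
# First-law bookkeeping: dU = Q - W, where Q is heat added TO the system
# and W is work done BY the system on its surroundings. Numbers are
# arbitrary and purely illustrative.
def internal_energy_change(heat_in, work_out):
    return heat_in - work_out

# A gas absorbs 500 J of heat and does 200 J of work pushing a piston:
dU = internal_energy_change(heat_in=500.0, work_out=200.0)
print(f"Change in internal energy: {dU:+.0f} J")
```

The same +300 J change could equally come from, say, 300 J of heat and no work, which is the point made above: many combinations of heat and work produce the same internal energy change.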

An important concept in thermodynamics is the “thermodynamic system”, a precisely defined region of the universe under study. Everything in the universe except the system is known as the surroundings, and is separated from the system by a boundary which may be notional or real, but which by convention delimits a finite volume. Exchanges of work, heat, or matter between the system and the surroundings take place across this boundary. Thermodynamics deals only with the large scale response of a system which we can observe and measure in experiments (such as steam engines, for which the study was first developed).

We have written many articles about the First Law of Thermodynamics for Universe Today. Here’s an article about entropy, and here’s an article about Hooke’s Law.

If you’d like more info on the First Law of Thermodynamics, check out NASA’s Glenn Research Center, and here’s a link to Hyperphysics.

We’ve also recorded an episode of Astronomy Cast all about planet Earth. Listen here, Episode 51: Earth.

Sources:
http://en.wikipedia.org/wiki/First_law_of_thermodynamics
http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/firlaw.html
http://en.wikipedia.org/wiki/Internal_energy
http://www.grc.nasa.gov/WWW/K-12/airplane/thermo1.html
http://en.wikipedia.org/wiki/Thermodynamics
http://en.wikipedia.org/wiki/Laws_of_thermodynamics

What are Earthquake Fault Lines?

False-color composite image of the Port-au-Prince, Haiti region, taken Jan. 27, 2010 by NASA’s UAVSAR airborne radar. The city is denoted by the yellow arrow; the black arrow points to the fault responsible for the Jan. 12 earthquake. Image credit: NASA

Every so often, in different regions of the world, the Earth feels the need to release energy in the form of seismic waves. These waves cause a great deal of damage as the energy is transferred through the tectonic plates and into the Earth's crust. For those living in an area directly above where two tectonic plates meet, the experience can be quite harrowing!

The release occurs along a fault: a fracture or discontinuity in a volume of rock, across which there is significant displacement. Along the line where the Earth's surface and the fault plane meet is what is known as a fault line. Understanding where fault lines lie is crucial to our understanding of Earth's geology, not to mention earthquake preparedness programs.

Definition:

In geology, a fault is a fracture or discontinuity in the planet's surface along which movement and displacement take place. On Earth, faults are the result of plate tectonic activity, the largest of them occurring at plate boundaries. Energy released by rapid movement on active faults is what causes most earthquakes in the world today.

The Earth's Tectonic Plates. Credit: msnucleus.org

Since faults do not usually consist of a single, clean fracture, geologists use the term “fault zone” when referring to the area where complex deformation is associated with the fault plane. The two sides of a non-vertical fault are known as the “hanging wall” and “footwall”.

By definition, the hanging wall occurs above the fault and the footwall occurs below the fault. This terminology comes from mining. Basically, when working a tabular ore body, the miner stood with the footwall under his feet and with the hanging wall hanging above him. This terminology has endured for geological engineers and surveyors.

Mechanisms:

The composition of Earth’s tectonic plates means that they cannot glide past each other easily along fault lines, and instead produce incredible amounts of friction. On occasion, the movement stops, causing stress to build up in rocks until it reaches a threshold. At this point, the accumulated stress is released along the fault line in the form of an earthquake.

When it comes to fault lines and the role they have in earthquakes, three important factors come into play. These are known as the “slip”, “heave” and “throw”. Slip refers to the relative movement of geological features present on either side of the fault plane; in other words, the relative motion of the rock on each side of the fault with respect to the other side.

Transform Plate Boundary
Tectonic Plate Boundaries. Credit:

Heave refers to the horizontal separation across the fault, while throw measures the vertical separation. Slip is the most important characteristic, in that it helps geologists to classify faults.
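For a dip-slip fault, the three quantities are related by simple trigonometry: the slip vector lies in the fault plane, and the dip angle splits it into a horizontal part (heave) and a vertical part (throw). A small sketch with illustrative numbers:

```python
import math

# For a dip-slip fault, slip along the fault plane decomposes via the
# dip angle into heave (horizontal separation) and throw (vertical
# separation): heave = slip * cos(dip), throw = slip * sin(dip).
# The values below are illustrative.
def heave_and_throw(slip, dip_degrees):
    dip = math.radians(dip_degrees)
    heave = slip * math.cos(dip)   # horizontal separation
    throw = slip * math.sin(dip)   # vertical separation
    return heave, throw

# 10 m of slip on a fault plane dipping at 60 degrees:
heave, throw = heave_and_throw(slip=10.0, dip_degrees=60.0)
print(f"heave = {heave:.2f} m, throw = {throw:.2f} m")
```

A steeply dipping fault thus converts most of its slip into throw, while a shallow one converts it mostly into heave.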

Types of Faults:

There are three categories of fault. The first is the "dip-slip fault," in which the relative movement (or slip) is almost vertical; these are further divided into normal and reverse faults, depending on whether the hanging wall moves down or up relative to the footwall.

Second, there are "strike-slip faults," in which the slip is approximately horizontal. A classic example is the San Andreas Fault, which was responsible for the massive 1906 San Francisco Earthquake. Strike-slip faults are also found offsetting mid-ocean ridges, such as the Mid-Atlantic Ridge – a 16,000 km long submerged mountain chain occupying the center of the Atlantic Ocean.

Lastly, there are oblique-slip faults which are a combination of the previous two, where both vertical and horizontal slips occur. Nearly all faults will have some component of both dip-slip and strike-slip, so defining a fault as oblique requires both dip and strike components to be measurable and significant.

Map of the Earth showing fault lines (blue) and zones of volcanic activity (red). Credit: zmescience.com

Impacts of Fault Lines:

For people living in active fault zones, earthquakes are a regular hazard that can play havoc with infrastructure and lead to injuries and death. As such, structural engineers must ensure that safeguards are taken when building along fault zones, and factor in the level of fault activity in the region.

This is especially true when building crucial infrastructure, such as pipelines, power plants, dams, hospitals and schools. In coastal regions, engineers must also address whether tectonic activity can lead to tsunami hazards.

For example, in California, new construction is prohibited on or near faults that have been active since the Holocene epoch (the last 11,700 years) or even the Pleistocene epoch (the past 2.6 million years). Similar safeguards play a role in new construction projects in locations along the Pacific Ring of Fire, where many urban centers exist (particularly in Japan).

Various techniques are used to gauge when fault activity last took place, such as studying soil and mineral samples and applying organic and radiocarbon dating.

We have written many articles about earthquakes for Universe Today. Here's What Causes Earthquakes?, What is an Earthquake?, Plate Boundaries, Famous Earthquakes, and What is the Pacific Ring of Fire?

If you’d like more info on earthquakes, check out the U.S. Geological Survey Website. And here’s a link to NASA’s Earth Observatory.

We’ve also recorded related episodes of Astronomy Cast about Plate Tectonics. Listen here, Episode 142: Plate Tectonics.

Sources: