Virtual Observatory Discovers New Cataclysmic Variable

Simulation of an intermediate polar CV star (Dr Andy Beardmore, Keele University)

In my article two weeks ago, I discussed how data mining large surveys through online observatories would lead to new discoveries. Sure enough, a pair of astronomers, Ivan Zolotukhin and Igor Chilingarian, has announced the discovery of a cataclysmic variable (CV) using data from the Virtual Observatory.


Cataclysmic variables are often called “novae”. However, they’re not single stars. These are actually binary systems in which matter from a secondary (usually post-main-sequence) star is accreted onto a white dwarf. The accreted material piles up on the white dwarf’s surface until it reaches a critical density and undergoes a brief but intense phase of fusion, increasing the brightness of the system considerably. Unlike in a type Ia supernova, the accumulated mass never reaches the critical threshold needed to destroy the white dwarf itself.

The team began by considering a list of 107 objects from the Galactic Plane Survey conducted by the Advanced Satellite for Cosmology and Astrophysics (ASCA, a Japanese satellite operating in the x-ray regime). These objects were exceptional x-ray emitters that had not yet been classified. While other astronomers have done targeted investigations of individual objects requiring new telescope time, this team attempted to determine whether any of the odd objects were CVs using readily available data from the Virtual Observatory.

Since the objects were all strong x-ray sources, they all met at least one criterion for being a CV. Another is that CVs are often strong Hα emitters, since their eruptions eject hot hydrogen gas. To test whether any of the objects were emitters in this regime, the astronomers cross-referenced the list with data from the Isaac Newton Telescope Photometric Hα Survey of the northern Galactic plane (IPHAS) using a color-color diagram. In the portion of the IPHAS field of view that overlapped the ASCA image for one of the objects, the team found a source that emitted strongly in Hα. But in such a dense field, and with such different wavelength regimes, it was difficult to be confident that the two detections were really the same object.

To determine whether the two interesting objects were indeed the same, or whether they just happened to lie near one another, the pair turned to data from Chandra. Since Chandra has a much smaller positional uncertainty (0.6 arcseconds), the pair was able to confirm that the Hα source from IPHAS was indeed the same object detected in the ASCA survey.
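For readers who want a feel for how such a positional cross-match works in practice, here is a minimal sketch using astropy. The coordinates and the 0.6-arcsecond match radius are purely illustrative; the real identification used the actual ASCA, Chandra, and IPHAS source positions.

```python
from astropy.coordinates import SkyCoord
import astropy.units as u

# Hypothetical positions for the Chandra X-ray source and an IPHAS Halpha candidate
xray_source = SkyCoord(ra=285.2471 * u.deg, dec=3.6845 * u.deg)
iphas_candidate = SkyCoord(ra=285.2470 * u.deg, dec=3.6846 * u.deg)

# Accept the identification only if the positions agree to within the
# ~0.6 arcsecond Chandra uncertainty quoted above
separation = xray_source.separation(iphas_candidate)
print(f"Separation: {separation.arcsec:.2f} arcsec")
if separation < 0.6 * u.arcsec:
    print("Positions consistent -- likely the same object")
```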

Thus, the object passed the two tests the team had devised for finding cataclysmic variables. At this point, followup observation was warranted. The astronomers used the 3.5-m Calar Alto telescope to conduct spectroscopic observations and confirmed that the star was indeed a CV. In particular, it looked to be a subclass in which the white dwarf primary has a magnetic field strong enough to disrupt the inner accretion disk, funneling the accreting material onto the star’s magnetic poles (this is known as an intermediate polar CV).

This discovery is an example of how discoveries are just waiting to happen in data that’s already available and sitting in archives, waiting to be explored. Much of this data is even available to the public and can be mined by anyone with the proper computer programs and know-how. Undoubtedly, as these storehouses of data become organized in more user-friendly ways, additional discoveries will be made in just this manner.

The Tug of Exoplanets on Exoplanets

Earlier this year, I wrote about how an apparent change in the orbital characteristics of the transiting planet TrES-2b might indicate a new planet, much in the same way perturbations of Uranus revealed the presence of Neptune. A follow-up study has now been conducted by astronomers at the University of Arizona, and another study, on the planet WASP-3b, also enters the fray.

The new study by the University of Arizona team observed TrES-2b on June 15, 2009, just seven orbits after the observations by Mislis et al. that reported the change in orbit. Mislis et al. had found that not only was the onset of the transit offset, but the angle of inclination was slowly changing. Yet the Arizona team found their results matched the previous data sets, and found no indication of either effect (within error) when compared to the timing predictions from other, earlier studies.

Additionally, an unrelated study led by Ronald Gilliland of the Space Telescope Science Institute, discussing various sampling modes of the Kepler telescope, used the TrES-2b system as an example and had coincidentally preceded and overlapped one of the observations made by Mislis et al. This study, too, found no variation in the orbital characteristics of the planet.

Another test they applied to determine whether the orbit was changing was the depth of the eclipse. Mislis’ team predicted that the trend would slowly change the plane of the orbit such that, eventually, the planet would no longer eclipse the star. But before that happened, there should be a period of time during which the planet covered less and less of the star. If that were the case, the amount of light blocked would decrease as well, until the transit vanished altogether. The Arizona team compared the depth of the eclipses they observed with the earlier observations and found no change here either.

So what went wrong with the data from Mislis et al.? One possibility is that they did not properly account for differences between their filter and the one used for the original observations from which the transit timing was determined. Stars have a feature known as limb darkening, in which the edges of the stellar disk appear darker because of the angle at which the light escapes. Some light is scattered in the atmosphere of the star, and since that scattering is wavelength dependent, so too are the effects of limb darkening. If a photometric filter observes in a slightly different part of the spectrum, it will record those effects differently.
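To make that concrete, limb darkening is commonly described by simple parameterized laws whose coefficients depend on the filter being used. Below is a minimal sketch of the widely used quadratic law; the coefficients are invented for illustration (real values are tabulated from stellar atmosphere models), but they show how a bluer and a redder bandpass can imply different intensity profiles across the stellar disk.

```python
import numpy as np

def quadratic_limb_darkening(mu, u1, u2):
    """Quadratic law: I(mu)/I(1) = 1 - u1*(1 - mu) - u2*(1 - mu)**2,
    where mu is the cosine of the angle from the center of the stellar disk."""
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

mu = np.linspace(0.1, 1.0, 4)
# Illustrative coefficients only -- limb darkening is typically stronger in bluer bands
blue_band = quadratic_limb_darkening(mu, u1=0.6, u2=0.2)
red_band = quadratic_limb_darkening(mu, u1=0.3, u2=0.2)
for m, b, r in zip(mu, blue_band, red_band):
    print(f"mu = {m:.1f}   blue I/I0 = {b:.2f}   red I/I0 = {r:.2f}")
```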

While these findings have discredited the notion that there are perturbations in the TrES-2b system, the idea that we can find exoplanets by their effects on known ones is still an attractive one that other astronomers are pursuing. One team, led by G. Maciejewski, has launched an international observing campaign to discover new planets by just this method. The campaign uses a series of telescopes, ranging from 0.6 to 2.2 meters and located around the world, to frequently monitor stars with known transiting planets. And this study may have just had its first success.

In a paper recently uploaded to arXiv, the team announced that variations in the timing of transits for the planet WASP-3b indicate the presence of a 15 Earth-mass planet in a 2:1 orbital resonance with the known one. Currently, the team is working on followup observations of their own, including radial velocity measurements with the Hobby-Eberly Telescope at the University of Texas at Austin. With any luck, this new method will begin to turn up new planets.
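The underlying test is simple: compare each observed mid-transit time with the time predicted by a fixed-period ephemeris and look for a pattern in the residuals. Here is a minimal sketch of that observed-minus-calculated (O-C) bookkeeping; the epoch, period, and transit times are made up for illustration, not taken from the WASP-3 paper.

```python
import numpy as np

t0 = 2454000.000       # reference mid-transit time (BJD) -- hypothetical
period = 1.846835      # orbital period in days -- hypothetical

# Hypothetical measured mid-transit times
observed = np.array([2454001.8470, 2454003.6937, 2454005.5408, 2454007.3872])

epochs = np.round((observed - t0) / period)       # transit number for each observation
calculated = t0 + epochs * period                 # times predicted by a constant period
o_minus_c = (observed - calculated) * 24 * 60     # residuals in minutes

for n, oc in zip(epochs.astype(int), o_minus_c):
    print(f"epoch {n}: O-C = {oc:+.2f} min")
# A periodic pattern in O-C, rather than scatter about zero, is the signature
# of an unseen perturbing planet.
```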

UPDATE: It looks like Maciejewski’s team has announced another potential planet through timing variations. This time around WASP-10.

Stolen: Magellanic Clouds – Return to Andromeda

The Magellanic Clouds are an oddity. Their relative velocity is suspiciously close to the escape velocity of the Milky Way system, making it somewhat difficult for them to have formed as part of the system. Additionally, their direction of motion is nearly perpendicular to the disk of the galaxy, and satellite systems, especially ones as large as the Magellanic Clouds, should show more alignment with the plane if they formed alongside it. Their gas content is also notably different from that of the other satellite galaxies of our galaxy. The combination of these features suggests to some that the Magellanic Clouds aren’t native to the Milky Way and were instead intercepted.
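A quick back-of-the-envelope calculation shows why the escape-velocity point matters. The enclosed mass and distance below are round, illustrative numbers rather than values from any particular study, but they give a feel for the comparison.

```python
import numpy as np

G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2
M_enclosed = 1e12 * 1.989e30      # ~1e12 solar masses inside the Clouds' orbit (assumed)
r = 50 * 3.086e19                 # ~50 kpc in metres (assumed)

v_esc = np.sqrt(2 * G * M_enclosed / r)
print(f"Escape velocity at ~50 kpc: {v_esc / 1e3:.0f} km/s")
# For these inputs v_esc is roughly 400 km/s -- the same order as the measured
# space velocities of the Clouds, which is why their being bound is not obvious.
```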

But where did they come from? Although the suggestion is not entirely new, a recent paper, accepted to the Astrophysical Journal Letters, suggests they may have been captured after a past merger in the Andromeda Galaxy (M31).

To analyze this proposition, the researchers, Yang (of the Chinese Academy of Sciences) and Hammer (of the University of Paris Diderot), conducted simulations backtracking the positions of the Magellanic Clouds. While this may sound straightforward, the process is anything but. Since galaxies are extended objects, their three-dimensional shapes and mass profiles must be worked out extremely well to properly account for the path of motion. Additionally, the Andromeda galaxy is certainly moving and would have been in a different position than it is observed in today. But exactly where was it when the Magellanic Clouds would have been expelled? This is an important question, but not an easy one to answer, given that measuring the proper motions of objects so far away is difficult.

But wait. There’s more! As always, there’s a significant amount of mass that can’t be seen at all! The presence and distribution of dark matter would have greatly affected the trajectory of the expelled galaxies. Fortunately, our own galaxy seems to be in a fairly quiescent phase, and other studies have suggested that dark matter halos should be mostly spherical unless perturbed. Furthermore, distant structures such as the Virgo cluster, as well as the “Great Attractor”, would also have played into the trajectories.

These uncertainties take what would otherwise be a fairly simple problem and turn it into one where the researchers were forced to explore the parameter space with a range of reasonable inputs to see which values worked. In doing so, the pair of astronomers concluded “it could be the case, within a reasonable range of parameters for both the Milky Way and M31.” If so, the clouds spent 4 to 8 billion years flying across intergalactic space before being caught by our own galaxy.
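To give a flavor of what “backtracking” means in practice, here is a heavily simplified sketch: a test particle integrated backwards in time in a single point-mass potential standing in for the Milky Way. The real calculation uses extended mass models, a moving M31, and dark matter halos; the potential, masses, and starting conditions below are all illustrative.

```python
import numpy as np

G = 0.045      # gravitational constant in kpc^3 / (1e10 Msun * Myr^2), approximately
M_mw = 100.0   # Milky Way mass of ~1e12 Msun, in units of 1e10 Msun (assumed)

def acceleration(pos):
    """Point-mass gravitational acceleration (a crude stand-in for the real potential)."""
    r = np.linalg.norm(pos)
    return -G * M_mw * pos / r**3

pos = np.array([-1.0, -41.0, -28.0])    # hypothetical present-day position, kpc
vel = np.array([-0.08, -0.23, 0.21])    # hypothetical velocity, kpc/Myr (~300 km/s)

dt = -1.0                  # negative time step: integrate into the past, 1 Myr at a time
for _ in range(5000):      # 5 Gyr backwards, using a leapfrog (kick-drift-kick) step
    vel += 0.5 * dt * acceleration(pos)
    pos += dt * vel
    vel += 0.5 * dt * acceleration(pos)

print(f"Toy-model position 5 Gyr ago: {pos} kpc")
```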

But could there be further evidence to support this? The authors note that a merger event of such scale in Andromeda would likely have induced vast amounts of star formation. As such, we should expect to see an increased number of stars with the corresponding age. The authors do not make any statements as to whether or not this is the case. Regardless, the hypothesis is interesting and reminds us how dynamic our universe can be.

The Habitability of Gliese 581d

The Gliese 581 system has been making headlines recently for its most newly announced planet, which may lie in the habitable zone. Hopes were somewhat dashed when we were reminded that the significance of its detection was only around 3 sigma, whereas most astronomical discoveries are held to considerably higher confidence before major announcements, but the Gliese 581 system may yet have more surprises. When the planet Gliese 581d was first discovered, it was placed outside the expected habitable zone. But in 2009, reanalysis of the data refined the orbital parameters and moved the planet in, just to the edge of the habitable zone. Several authors have suggested that, with sufficient greenhouse gases, this may push Gliese 581d into the habitable zone. A new paper to be published in an upcoming issue of Astronomy & Astrophysics simulates a wide range of conditions to explore just what characteristics would be required.

The team, led by Robin Wordsworth at the University of Paris, varied properties of the planet including surface gravity, albedo, and the composition of potential atmospheres. Additionally, the simulations were also run for a planet in a similar orbit around the Sun (Gliese 581 is an M dwarf) to understand how the different distribution of stellar energy could affect the atmosphere. The team discovered that, for atmospheres composed primarily of CO2, the redder star would warm the planet more than a solar-type star would, because CO2 is less able to scatter the redder light, allowing more of it to reach the ground.
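To see why greenhouse warming is the crux of the matter, it helps to estimate the planet's equilibrium temperature before any atmospheric effects. The stellar and orbital numbers below are approximate values for the Gliese 581 system and the albedo is simply assumed, so treat this as an order-of-magnitude sketch rather than the paper's actual calculation.

```python
import numpy as np

T_star = 3500.0              # K, effective temperature of the M dwarf (approximate)
R_star = 0.29 * 6.96e8       # m, stellar radius (approximate)
a = 0.22 * 1.496e11          # m, orbital distance of Gliese 581d (approximate)
albedo = 0.3                 # Bond albedo (assumed)

# Equilibrium temperature of a rapidly rotating planet with no greenhouse effect
T_eq = T_star * np.sqrt(R_star / (2 * a)) * (1 - albedo) ** 0.25
print(f"Equilibrium temperature: {T_eq:.0f} K")
# Roughly 180 K -- far below freezing, which is why the question is whether a
# dense CO2 atmosphere can supply enough greenhouse warming for liquid water.
```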

One of the potential roadblocks to warming the team considered was the formation of clouds. The team first considered CO2 clouds, which are expected towards the outer edge of the habitable zone and which form on Mars. Since clouds tend to be reflective, they would counteract the warming from incoming starlight and cool the planet. Again, due to the nature of the star, the redder light would mitigate this somewhat, allowing more of it to penetrate a potential cloud deck.

Should some H2O be present, its effects are mixed. Clouds and ice are both very reflective, which would decrease the amount of energy captured by the planet, but water also absorbs well in the infrared. As such, clouds of water vapor can intercept heat radiating from the surface back into space, trapping it and producing an overall warming. The problem is getting those clouds to form in the first place.

The inclusion of nitrogen gas (common in the atmospheres of planets in the solar system) had little effect on the simulations. The primary reason was its lack of absorption of redder light. In general, including it only slightly changed the specific heat of the atmosphere and broadened the absorption lines of other gases, giving only a very minor additional ability to trap heat. Since the team was looking for conservative estimates, they ultimately left nitrogen out of their final considerations.

Combining all of these considerations, the team found that, even given the most unfavorable values for most variables, a sufficiently high atmospheric pressure would allow for the presence of liquid water on the surface of the planet, which scientists maintain is a key requirement for abiogenesis. Favorable combinations of the other characteristics were also able to produce liquid water at pressures as low as 5 bars. The team also notes that other greenhouse gases, such as methane, were excluded due to their rarity, but should they exist, the prospects for liquid water would improve further.

Ultimately, the simulation was only a one-dimensional model, which essentially considered a thin column of the atmosphere on the day side of the planet. The team suggests that, for a better understanding, three-dimensional models would need to be created. In the future, they plan to use just such modeling, which would allow a better understanding of what is happening elsewhere on the planet. For example, should temperatures fall too quickly on the night side, the necessary gases could condense out, leaving the atmosphere in an unstable state. Additionally, as we discover more transiting exoplanets and determine their atmospheric properties from transmission spectra, astronomers will be better able to constrain what typical atmospheres really look like.

Probing Exoplanets

Sometimes topics segue perfectly. With the recent buzz about habitable planets, followed by the rain-on-the-parade articles about the not-insignificant uncertainties in the detection of planets around Gliese 581 as well as in claims of molecules in exoplanet atmospheres, it’s not been the best of times for finding life. But in a comment on my last article, Lawrence Crowell noted: “You can’t really know for sure whether a planet has life until you actually go there and look on the ground. This is not at all easy, and probably it is at best possible to send a probe within a 25 to 50 light year radius.”

This is right on the mark and happens to be another topic that’s been under some discussion on arXiv recently in a short series of papers and responses. The first paper, accepted to the journal Astrobiology and led by Jean Schneider of the Observatory of Paris-Meudon, seeks to describe “the far future of exoplanet direct characterization”. In general, this paper discusses where the study of exoplanets could go from our current knowledge base. It proposes two main directions: finding more planets to better survey the parameter space planets inhabit, or more in-depth, long-term study of the planets we already know.

But perhaps the more interesting aspect of the paper, and the one that has generated a rare response, is what can be done should we detect a planet with promising characteristics relatively nearby. They first consider trying to directly image the planet’s surface and calculate that the diameter of a telescope capable of doing so would be roughly half that of the Sun. Instead, if we truly wish to get a direct image, the best bet would be to go there. They quickly address a few of the potential challenges.

The first is cosmic rays. These high-energy particles can wreak havoc on electronics. The second is simple dust grains. The team calculates that an impact with “a 100 micron interstellar grain at 0.3 the speed of light has the same kinetic energy than a 100 ton body at 100 km/hour”. With present technology, any spacecraft equipped with sufficient shielding would be prohibitively massive and difficult to accelerate to the velocities necessary to make the trip worthwhile.

But Ian Crawford, of the University of London, thinks that the risk posed by such grains may be overstated. Firstly, Crawford believes Schneider’s requirement of 30% of the speed of light is somewhat overzealous. Most proposals for interstellar travel by probes instead adopt a value closer to 10% of the speed of light; in particular, the most exhaustive proposal yet created (the Daedalus Project) only aimed for a velocity of 0.12c. At the time, the ability to build such a craft was well beyond our means, but with the advent of miniaturized electronic components, the prospect may need to be reevaluated.

Aside from the overestimate of the necessary velocity, Crawford suggests that Schneider’s team overstated the size of the dust grains. In the solar neighborhood, dust grains are estimated to be nearly 100 times smaller than those adopted by Schneider’s team. The combination of the revised size estimate and the lower velocity takes the energy released in a collision from a whopping 4 × 10⁷ joules down to a mere 4.5 joules. At the absolute largest, recent studies suggest the upper limit for dust particles is more in the range of 4.5 micrometers.
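Those two figures are consistent with a simple rescaling of the numbers already quoted: grain mass scales with the cube of its size, and (non-relativistically) kinetic energy scales with the square of the speed. A quick sanity check:

```python
# Scaling check using only the figures quoted in the text
E_schneider = 4e7          # J, impact energy for a 100-micron grain at 0.3c
size_factor = 1 / 100      # grains ~100 times smaller in the solar neighborhood
speed_factor = 0.1 / 0.3   # 0.1c instead of 0.3c

# Mass goes as size cubed; kinetic energy goes as velocity squared
E_crawford = E_schneider * size_factor**3 * speed_factor**2
print(f"Rescaled impact energy: {E_crawford:.1f} J")   # ~4.4 J, close to the 4.5 J quoted
```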

Lastly, Crawford suggests that there may be better ways to provide shielding than a brute-force wall of mass. If a spacecraft were able to detect incoming particles using radar or another technique, it could conceivably destroy them with lasers or deflect them with an electromagnetic field.

But Schneider wasn’t finished. He issued a response to Crawford’s response. In it, he criticizes Crawford’s optimistic vision of using nuclear or antimatter propulsion systems. He notes that, thus far, nuclear propulsion has only been able to produce short impulses instead of continuous thrust and that, although some electronics have been miniaturized, the best analogue yet developed, the National Ignition Facility, “with all its control and cooling systems, is presently quite a non-miniaturized building.”

Antimatter propulsion may be even more difficult. Currently, our ability to produce antimatter is severely limited. Schneider estimates that it would take 200 terawatts of power to produce the required amounts. Meanwhile, the total power consumption of the entire Earth is only about 20 terawatts.

In response to the charge of overestimation, Schneider notes that, although such large dust grains would be rare, “even two lethal or severe collisions are prohibitory”; he does not, however, go on to make any real estimate of what the probability of such a collision would actually be.

Ultimately, Schneider concludes that all of this discussion is, at best, extremely preliminary. Before any such undertaking would be seriously considered, it would require “a precursor mission to secure the technological concept, including shielding mechanisms, at say 500 to 1000 Astronomical Units.” Schneider and his team remind us that the technology is not yet there and that there are legitimate threats we must address. Crawford, on the other hand, suggests that some of these challenges are ones we may already be well on the road to addressing and constraining.

Missing Molecules in Exoplanet Atmospheres

Artist's View of Extrasolar Planet HD 189733b

Every day, I wake up and flip through the titles and abstracts of recent articles posted to arXiv. With increasing regularity, papers pop up announcing the discovery of a new extrasolar planet. At this point, I keep scrolling. How many more hot Jupiters do you really want to hear about? If it’s a record setter in some way, I’ll read it. Another time I’ll pay attention is if there are reports of spectroscopic detections of components of the planet’s atmosphere. While a fistful of transiting planets have had spectral lines detected, such measurements are still pretty rare, and new ones will help constrain our understanding of how planets form.

The holy grail in this field would be to discover signatures of molecules that don’t readily form through non-biological processes and are characteristic of life (as we know it). In 2008, a paper announced the first detection of CO2 in an exoplanet atmosphere (that of HD 189733b); CO2, although not produced exclusively by biology, is one of the tracer molecules for life. While HD 189733b isn’t a candidate for searches for ET, it was still a notable first.

Then again, perhaps not. A new study casts doubt on that discovery, as well as on reports of various molecules in the atmosphere of another exoplanet.

Thus far there have been two methods by which astronomers have attempted to identify molecular species in the atmospheres of exoplanets. The first uses starlight filtered through the planet’s atmosphere, searching for spectral lines that are only present during transit. The difficulty with this method is that spreading the light out into a spectrum weakens the signal, sometimes to the point that it’s lost in systematic noise from the telescope itself. The alternative is to use photometric observations, which look at the change in light in different color ranges, to characterize the molecules. Since the light in each range is all lumped together, this can improve the signal, but the technique is relatively new and its statistical methodology is still shaky. Additionally, since only one filter can be used at a time, the observations must generally be taken on different transits, which allows the characteristics of the star to change between measurements due to star spots.
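A rough estimate shows just how small these atmospheric signals are compared with the transit itself. The planet and star values below are approximate numbers for a hot Jupiter like HD 189733b, used only to set the scale; none of them come from the studies discussed here.

```python
import numpy as np

k_B, m_H = 1.38e-23, 1.67e-27   # Boltzmann constant, hydrogen mass (SI units)
R_p = 1.14 * 7.15e7             # m, planet radius (approximate)
R_s = 0.76 * 6.96e8             # m, stellar radius (approximate)
T, mu, g = 1200.0, 2.3, 21.0    # K, mean molecular weight (amu), gravity (m/s^2) -- approximate

H = k_B * T / (mu * m_H * g)            # atmospheric scale height
depth = (R_p / R_s) ** 2                # geometric transit depth
extra = 2 * 5 * R_p * H / R_s ** 2      # extra depth from ~5 scale heights of absorbing gas

print(f"Transit depth:       {depth * 100:.2f} %")
print(f"Atmospheric signal:  {extra * 1e6:.0f} ppm")
# The molecular signature is only a few hundred parts per million on top of a
# ~2% transit, so even small uncorrected detector drifts can mimic or mask it.
```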

The 2008 study by Swain et al. that announced the presence of CO2 used the first of these methods. The trouble started the following year, when a followup study by Sing et al. failed to reproduce the results. In their paper, Sing’s team stated, “Either the planet’s transmission spectrum is variable, or residual systematic errors still plague the edges of the Swain et al. spectrum.”

The new study, by Gibson, Pont, and Aigrain (working from the Universities of Oxford and Exeter), suggests that the claims of Swain’s team were a result of the latter. They argue that the signal is swamped by more noise than Swain et al. accounted for. This noise comes from the telescope itself (in this case Hubble, since these observations must be made from above Earth’s atmosphere, which would otherwise add its own spectral signature). Specifically, they report that because there are changes in the state of the detector itself that are often hard to identify and correct for, Swain’s team underestimated the errors, leading to a false positive. Gibson’s team was able to reproduce the results using Swain’s method, but when they applied a more complete method, one which did not assume the detector could be calibrated so simply from observations of the star outside the transit and on different Hubble orbits, the estimated errors increased significantly, swamping the signal Swain claimed to have observed.

Gibson’s team also reviewed the detections of molecules in the atmosphere of the extrasolar planet around XO-1 (in which Tinetti et al. reported finding methane, water, and CO2). In both cases, they again find that the detections were overstated and that the ability to tease a signal from the data depended on questionable methods.

This seems to be a bad week for those hoping to find life on extrasolar planets. With this article casting doubt on our ability to detect molecules in distant atmospheres, and the recent caution on the detection of Gliese 581g, one might worry about our ability to explore these new frontiers, but what it really underscores is the need to refine our techniques and keep taking deeper looks. This has been a frank reassessment of the current state of knowledge, but it does not in any way limit our future discoveries. Additionally, this is how science works; scientists review each other’s data and conclusions. So, looking on the bright side, science works, even if it’s not exactly telling us what we’d like to hear.

Astronomy: The Next Generation

Future Tense

In some respects, the field of astronomy has been rapidly changing. New advances in technology have allowed for the exploration of new spectral regimes, new methods of image acquisition, new methods of simulation, and more. But in other respects, we’re still doing the same thing we were doing 100 years ago. We take images and look to see how they’ve changed. We break light into its different colors, looking for emission and absorption. The fact that we can do this faster and to greater distances has revolutionized our understanding, but not the underlying methodology.

But recently, the field has begun to change. The days of the lone astronomer at the eyepiece are already gone. Data is being taken faster than it can be processed, stored in easily accessible archives, and worked on by massive international teams of astronomers. At the recent International Astronomers Meeting in Rio de Janeiro, astronomer Ray Norris of Australia’s Commonwealth Scientific and Industrial Research Organization (CSIRO) discussed these changes, how far they can go, what we might learn, and what we might lose.

Observatories
One of the ways astronomers have long advanced the field is by collecting more light, allowing them to peer deeper into space. This requires telescopes with greater light-gathering power and, consequently, larger diameters. These larger telescopes also offer the benefit of improved resolution, so the advantages are clear. As such, telescopes in the planning stages have names indicative of immense sizes: the ESO’s “Overwhelmingly Large Telescope” (OWL), the “Extremely Large Array” (ELA), and the “Square Kilometer Array” (SKA) are all massive instruments costing billions of dollars and involving resources from numerous nations.

But as sizes soar, so too does the cost. Already, observatories are straining budgets, especially in the wake of a global recession. Norris states, “To build even bigger telescopes in twenty years time will cost a significant fraction of a nation’s wealth, and it is unlikely that any nation, or group of nations, will set a sufficiently high priority on astronomy to fund such an instrument. So astronomy may be reaching the maximum size of telescope that can reasonably be built.”

Thus, instead of fixating on light-gathering power and resolution, Norris suggests that astronomers will need to explore new areas of potential discovery. Historically, major discoveries have been made in this manner: the discovery of gamma-ray bursts occurred when our observational coverage was expanded into the high-energy range. The spectral range is fairly well covered at this point, but other domains still hold large potential for exploration. For instance, as CCDs were developed, exposure times for images were shortened and new classes of variable stars were discovered. Even shorter-duration exposures have created the field of asteroseismology. With advances in detector technology, this lower boundary could be pushed even further. On the other end, the stockpiling of images over long time spans can allow astronomers to explore the history of single objects in greater detail than ever before.

Data Access
In recent years, many of these changes have been pushed forward by large survey programs like the 2 Micron All Sky Survey (2MASS) and the All Sky Automated Survey (ASAS), to name just two of the numerous large-scale surveys. With these large stores of pre-collected data, astronomers are able to access observations in a new way. Instead of proposing telescope time and then hoping their project is approved, astronomers have increasingly unfettered access to data. Norris proposes that, should this trend continue, the next generation of astronomers may do vast amounts of work without ever directly visiting an observatory or planning an observing run. Instead, data will be culled from sources like the Virtual Observatory.

Of course, there will still be a need for deeper and more specialized data. In this respect, physical observatories will still see use. Already, much of the data taken from even targeted observing runs is making it into the astronomical public domain. While the teams that design projects still get first pass on data, many observatories release the data for free use after an allotted time. In many cases, this has led to another team picking up the data and discovering something the original team had missed. As Norris puts it, “much astronomical discovery occurs after the data are released to other groups, who are able to add value to the data by combining it with data, models, or ideas which may not have been accessible to the instrument designers.”

As such, Norris recommends encouraging astronomers to contribute data in this way. A research career is often built on the number of publications, but this runs the risk of punishing those who spend large amounts of time on a single project that produces only a small number of papers. Instead, Norris suggests a system in which astronomers would also earn recognition for the amount of data they have helped release to the community, since this too increases the collective knowledge.

Data Processing
Since there is a clear trend towards automated data taking, it is quite natural that much of the initial data processing can be automated as well. Before images are suitable for astronomical research, they must be cleaned of noise and calibrated. Many techniques require further processing that is often tedious. I have experienced this myself: much of a ten-week summer internship I attended involved the repetitive task of fitting point-spread-function profiles to stars in dozens of images, and then manually rejecting stars that were flawed in some way (such as being too near the edge of the frame and partially chopped off).

While this is often a valuable experience that teaches budding astronomers the reasoning behind processes, it can certainly be expedited by automated routines. Indeed, many techniques astronomers use for these tasks are ones they learned early in their careers and may well be out of date. As such, automated processing routines could be programmed to employ the current best practices to allow for the best possible data.
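As a small illustration of how such chores can be automated, here is a minimal sketch that detects stars in an image and throws out any that sit too close to the frame edge to be fit reliably. It uses the photutils package purely as an example; the detection parameters and the border width are arbitrary placeholders, not a recommended recipe.

```python
import numpy as np
from photutils.detection import DAOStarFinder

def usable_stars(image, fwhm=3.0, nsigma=5.0, border=15):
    """Detect stars and keep only those far enough from the frame edge for PSF fitting."""
    background = np.median(image)
    noise = np.std(image)
    finder = DAOStarFinder(fwhm=fwhm, threshold=nsigma * noise)
    sources = finder(image - background)
    if sources is None:          # nothing detected
        return sources
    ny, nx = image.shape
    x, y = sources['xcentroid'], sources['ycentroid']
    keep = (x > border) & (x < nx - border) & (y > border) & (y < ny - border)
    return sources[keep]
```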

But this method is not without its own perils. In such a scheme, new discoveries may be missed. A significantly unusual result may be interpreted by an algorithm as a flaw in the instrumentation or a cosmic ray strike and rejected, instead of being identified as a novel event that warrants further consideration. Additionally, image processing techniques can introduce artifacts of their own. Should astronomers not be at least somewhat familiar with these techniques and their pitfalls, they may interpret artificial results as a discovery.

Data Mining
With the vast increase in data being generated, astronomers will need new tools to explore it. Already, there have been efforts to tag data with appropriate identifiers through programs like Galaxy Zoo. Once such data is processed and sorted, astronomers will quickly be able to compare objects of interest at their computers, whereas previously an observing run would have had to be planned. As Norris explains, “The expertise that now goes into planning an observation will instead be devoted to planning a foray into the databases.” During my undergraduate coursework (ending in 2008, so still recent), astronomy majors were only required to take a single course in computer programming. If Norris’ predictions are correct, the courses students like me took in observational techniques (which still contained some work involving film photography) will likely be replaced with more programming as well as database administration.
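For a taste of what such a “foray into the databases” might look like, here is a minimal sketch of an archival cone search done entirely from a desk, using astroquery against the VizieR service. The position and the choice of the 2MASS point-source catalogue are placeholders picked for illustration.

```python
from astroquery.vizier import Vizier
from astropy.coordinates import SkyCoord
import astropy.units as u

# A hypothetical position of interest
position = SkyCoord(ra=285.25 * u.deg, dec=3.68 * u.deg)

# Cone search of the 2MASS point-source catalogue (VizieR table II/246)
vizier = Vizier(columns=['*'], row_limit=50)
result = vizier.query_region(position, radius=2 * u.arcmin, catalog='II/246')

print(result)   # one astropy Table per matched catalogue, ready for filtering and cross-matching
```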

Once the data are organized, astronomers will be able to quickly compare populations of objects on scales never before seen. Additionally, by easily accessing observations from multiple wavelength regimes, they will be able to gain a more comprehensive understanding of objects. Currently, astronomers tend to concentrate on one or two ranges of the spectrum, but access to so much more data will push them to diversify further or work more collaboratively.

Conclusions
With all the potential for advancement, Norris concludes that we may be entering a new Golden Age of astronomy. Discoveries will come faster than ever since data is so readily available. He speculates that PhD candidates will be doing cutting edge research shortly after beginning their programs. I question why advanced undergraduates and informed laymen wouldn’t as well.

Yet for all the possibilities, the easy access to data will attract the crackpots too. Already, incompetent frauds swarm journals looking for quotes to mine. How much worse will it be when they can point to the source material and their bizarre analysis to justify their nonsense? To combat this, astronomers (as all scientists) will need to improve their public outreach programs and prepare the public for the discoveries to come.

Poor in one, Rich in another

Tycho's Supernova Remnant. Credit: Spitzer, Chandra and Calar Alto Telescopes.


Just over three years ago, I wrote a blog post commemorating the 50th anniversary of one of the most notable papers in the history of astronomy. In this paper, Burbidge, Burbidge, Fowler, and Hoyle laid out the groundwork for our understanding of how the universe builds up heavy elements.

The short version of the story is that two main processes were identified: the slow (s) process and the rapid (r) process. The s-process is the one we usually think of, in which atoms slowly capture neutrons, building up their atomic mass. But as the paper pointed out, this often happens too slowly to pass the roadblocks posed by unstable isotopes, which decay before they can capture another neutron. In those cases the r-process is needed, in which the flux of neutrons is much higher, allowing the barrier to be overcome.

The combination of these two processes has done remarkably well in matching observations of the universe at large. But astronomers can never rest easily; the universe always has its oddities. One example is stars with very unusual relative amounts of the elements built up by these processes. Since the s-process is far more common, its elements are what we should see primarily, but in some stars, such as SDSS J2357-0052, there is an exceptionally high concentration of the rarer r-process elements. A recent paper explores this elemental enigma.

As the designation implies, SDSS J2357-0052’s uniqueness was discovered by the Sloan Digital Sky Survey (SDSS). The survey uses several filters to image fields of stars at different wavelengths. Some of the filters are chosen to lie in wavelength ranges in which there are well known absorption lines for elements known to be tracers of overall metallicity. This photometric system allowed an international team of astronomers, led by Wako Aoki of the National Astronomical Observatory in Tokyo, to get a quick and dirty view of the metal content of the stars and choose interesting ones for followup study.

These followup observations were done with high-resolution spectroscopy and showed that the star has less than one one-thousandth the amount of iron that the Sun does ([Fe/H] = -3.4), placing it among the most metal-poor stars ever discovered. Iron marks the end of the elements built up by fusion in stellar cores; beyond that atomic number, the relative abundances normally drop off very quickly. While the drop-off in SDSS J2357-0052 was still steep, it wasn’t nearly as dramatic as in most other stars. This star shows a dramatic enhancement of the r-process elements.
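For readers unfamiliar with the bracket notation, [Fe/H] is simply the base-10 logarithm of a star's iron-to-hydrogen ratio relative to the Sun's, so the quoted value translates directly into a linear fraction:

```python
# [Fe/H] = log10( (Fe/H)_star / (Fe/H)_sun ), so invert the logarithm
fe_h = -3.4
fraction_of_solar = 10 ** fe_h
print(f"Iron abundance: {fraction_of_solar:.1e} of the solar value")   # ~4e-4, i.e. well under 1/1000
```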

Yet this wasn’t exceptional in and of itself. Several metal-poor stars have been discovered with such r-process enhancements, but none coupled with such an extreme deficiency of iron. The implication of this combination is that this star was very close to a supernova. The authors suggest two scenarios that can explain the observations. In the first, the supernova occurred before the star formed, and SDSS J2357-0052 was formed in the immediate vicinity before the enriched material could disperse and mix into the interstellar medium. In the second, SDSS J2357-0052 was an already-formed star in a binary orbit with a star that became a supernova. If the latter is true, the event would likely have given the smaller star a large “kick”, since the mass holding the system together would have changed dramatically. Although no exceptional radial velocity was detected for SDSS J2357-0052, the motion (if it exists) could be in the plane of the sky, requiring proper motion studies to confirm or refute this possibility.

The authors also note that the first star with somewhat similar characteristics (although not as extreme) was discovered in the outer halo, where the likelihood of the necessary supernova occurring is low. As such, it is more likely that that star was ejected in such a process, which lends some credibility to the scenario in general, even if it turns out not to be the case for SDSS J2357-0052.

Hawking(ish) Radiation Observed

In 1974, Stephen Hawking proposed a seemingly ridiculous hypothesis: black holes, the gravitational monsters from which nothing escapes, evaporate. To justify this, he proposed that a pair of virtual particles straying too close to the event horizon could be split, with one member falling in and the other escaping to become a real particle. This would carry energy, and hence mass, away from the black hole and slowly deplete it. Due to the difficulty of observing astronomical black holes, this emission has gone undetected. But recently, a team of Italian physicists, led by Francesco Belgiorno, claims to have observed Hawking radiation in the lab. Well, sort of. It depends on your definition.

The experiment worked by sending powerful laser pulses through a block of ultra-pure glass. The intensity of the laser would change the optical properties of the glass, increasing the refractive index to the point that light could not pass. In essence, this created an artificial event horizon. But instead of a black hole’s horizon, which particles can cross but never return from, this created a “white hole” horizon, which particles cannot cross in the first place. If a virtual pair were created near this barrier, one member could be trapped on one side while the other escaped and was detected, creating a situation analogous to that predicted for Hawking radiation.

Readers with some background in quantum physics may be scratching their heads at this point. The experiment uses a barrier to impede the photons, but quantum tunneling has demonstrated that there’s no such thing as a perfect barrier: some photons should tunnel through. To avoid detecting these photons, the team simply moved the detector. Photons that tunnel through continue along the path on which they were launched, so the detector was placed at 90° to the beam, where it would not pick them up.

The change in position also helped to minimize other sources of false detections, such as scattering. At 90°, scattering only occurs for vertically polarized light, and the experiment used horizontally polarized light. As a check that none of the light had become mispolarized, the team verified that no photons at the corresponding wavelength were observed. The team also had to guard against false detections from absorption and re-emission by the molecules in the glass (fluorescence). This was handled through experimentation to understand how much fluorescence to expect, so its effects could be subtracted out; additionally, the group chose a wavelength at which fluorescence was minimized.

After removing all the sources of error they could account for, the team still reported a strong signal, which they interpreted as coming from separated virtual particles and call a detection of Hawking radiation. Other scientists disagree on the definition. While they do not question the interpretation, they note that Hawking radiation, by definition, occurs only at gravitational event horizons. While this detection is interesting, it does not shed light on the more interesting effects that come with gravitational event horizons, such as quantum gravity or the paradox posed by the trans-Planckian problem. In other words, while this may help establish that virtual particles can be split in this way, it says nothing about whether they could truly escape from near a black hole, which is a requirement for “true” Hawking radiation.

Meanwhile, other teams continue to explore similar effects with other artificial barriers and event horizons to probe the behavior of these virtual particles. Similar effects have been reported in other such systems, including ones in which water waves form the barrier.

M31’s Odd Rotation Curve

Early in astronomical history, galactic rotation curves were expected to be simple; they should behave much like the solar system, in which inner objects orbit faster and outer objects more slowly. To the surprise of many astronomers, when rotation curves were eventually worked out, they appeared mostly flat. The conclusion was that the mass we see is only a small fraction of the total mass, and that a mysterious dark matter must be holding galaxies together, keeping their outer regions rotating far faster than the visible mass alone would allow.

Recent observations of the Andromeda Galaxy’s (M31) rotation curve have shown that there may yet be more to learn. In the outermost parts of the galaxy, the rotation rate has been shown to increase. And M31 isn’t alone. According to Noordermeer et al. (2007), “in some cases, such as UGC 2953, UGC 3993 or UGC 11670 there are indications that the rotation curves start to rise again at the outer edges of the HI discs.” A new paper by a team of Spanish astronomers attempts to explain this oddity.

Although many spiral galaxies have been found with these oddly rising rotational velocities near their outer edges, Andromeda is both one of the most prominent and the closest. Detailed studies by Corbelli et al. (2010) and Chemin et al. (2009) mapped the rotation of the HI gas, showing that the velocity increases by some 50 km/s over the outer 7 kiloparsecs mapped. This is a significant fraction of the total radius, given that the studies extended to only ~38 kiloparsecs. While conventional models with dark matter are able to reproduce the rotational velocities of the inner portions of the galaxy, they have not explained this outer feature and instead predict that the curve should slowly fall off.

The new study, led by B. Ruiz-Granados and J.A. Rubino-Martin from the Instituto de Astrofisica de Canarias, attempts to explain this oddity using a force with which astronomers are very familiar: magnetic fields. This force has been shown to decrease less rapidly with distance than others on galactic scales, and in particular, studies of M31’s magnetic field show that it slowly changes angle with distance from the center of the galaxy. This slowly changing angle acts to decrease the angle between the field and the direction of motion of particles within it. As a result, “the field becomes more tightly wound with increasing galactocentric distance,” making the decrease in strength even slower.

Although galactic magnetic fields are weak by most standards, the sheer amount of matter they can affect and the charged nature of many gas clouds mean that even weak fields may play an important role. M31’s magnetic field has been estimated at ~4.6 microGauss. When a magnetic field of this strength was added into the modeling equations, the team found that it greatly improved the fit of the models to the observed rotation curve, matching the increase in rotational velocity.
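As a purely schematic illustration of why an extra, slowly declining term can change the outer behavior of a rotation curve, the sketch below adds component contributions in quadrature, which is the standard way different contributions to the circular velocity are combined. The functional forms and numbers are invented for illustration and are not the paper's model; the point is only that a term that falls off very slowly with radius can turn a gently declining outer curve into a rising one.

```python
import numpy as np

r = np.linspace(5, 38, 12)              # galactocentric radius, kpc
v_disk = 230 * np.exp(-r / 25)          # toy declining baryonic contribution (km/s)
v_halo = 200 * np.sqrt(r / (r + 8))     # toy dark-halo contribution (km/s)
v_mag = 25 * np.sqrt(r)                 # toy slowly growing magnetic contribution (km/s)

v_without_field = np.sqrt(v_disk**2 + v_halo**2)
v_with_field = np.sqrt(v_disk**2 + v_halo**2 + v_mag**2)

for ri, v1, v2 in zip(r, v_without_field, v_with_field):
    print(f"r = {ri:4.1f} kpc   without field: {v1:5.1f} km/s   with field: {v2:5.1f} km/s")
# Without the extra term the toy curve gently declines at large radii;
# with it, the outer curve flattens and begins to rise.
```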

The team notes that this finding is still speculative, as the understanding of the magnetic field at such distances is based solely on modeling. Although the magnetic field has been explored for the inner portions of the galaxy (roughly the inner 15 kiloparsecs), no direct measurement has yet been made in the regions in question. However, the model makes firm observational predictions that could be tested by upcoming facilities such as LOFAR and the SKA.