“What sort of gear did you use to capture that?” folks ask, imagining that I’m using a setup that required a second mortgage to pay for.
People are often surprised that I'm simply using a converted off-the-shelf webcam modified to fit into the eyepiece-holder of a telescope, along with freeware programs to control the camera, stack, and clean up images. And while there are multi-thousand-dollar rigs available commercially that yield images that would have been the envy of professional observatories even a decade ago, you may just find that you have the gear lying around to start doing planetary and lunar photography tonight.
OK, I'll admit: you do need a laptop and telescope (things that we typically have "lying around" the house!), but these are the two priciest items on the list. Living the vagabond life of a veteran, a teacher, and a freelance science writer ensures that our preferred cameras for conversion are always in the double-digit dollar range.
But converted webcam imaging is not new. We first read about the underground movement over a decade ago, back when amateur astrophotographers were hacking their Philips Vesta and ToUcam Pro webcams with stunning results. Celestron, Meade and Orion caught up to the times some years later and released their own commercial versions for planetary imaging.
A few freeware installations and the modification of a Logitech 3000 that I bought on rebate for $50 later, and I was imaging planets that same night.
Just about any webcam will yield decent results, though the discontinued Philips ToUcam Pro webcams remain the heavily sought-after Holy Grail of webcam astrophotography. The modification simply consists of removing the camera lens (don't do this with any camera that you aren't willing to gut and void the warranty on) and attaching a standard 1 ¼” eyepiece barrel in its place using cement glue.
For camera control, I use a program called K3CCDTools. This was freeware once upon a time; the program now costs $50 to install. I still find it well worth using, though I've since been turned on to some equally useful programs that are still free (more on that in a bit).
K3CCDTools will process your images from start to finish, but I find that Registax is great for post-image processing. Plus, you don’t want to waste valuable scope time processing images: I do the maximum number of video captures in the field, and then tinker with them later on cloudy nights.
Stacking video captures enables you to "grab" those brief moments of fine atmospheric seeing. Many astrophotographers will manually select the best frames from thousands one by one, but I'll admit we're often impatient and find that the selection algorithm in Registax does an acceptable job of picking the top 10% of frames in a flash.
As with Photoshop, a college course could be taught around Registax. Don't be intimidated, but do feel free to experiment! After stacking and optimizing, we find the true power in making images "pop" often lies in the final step, known as wavelet processing. A round of sharpening and contrast boosting in Photoshop can also go a long way; just remember that the goal is to apply the minimum processing needed to get the job done, not to leave the image looking unnatural and over-processed.
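Curious what that frame-grading step looks like under the hood? Here is a minimal Python sketch (using the OpenCV and NumPy libraries, with a hypothetical jupiter.avi capture file) that mimics the "keep the sharpest 10%" cut, a naive stack, and a crude sharpening pass. It is not what Registax actually does internally; real stackers also align each frame, and wavelet processing is more sophisticated than an unsharp mask:

```python
import cv2
import numpy as np

def sharpness(frame):
    """Score a frame by the variance of its Laplacian: higher = sharper."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

cap = cv2.VideoCapture("jupiter.avi")   # hypothetical capture file
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Keep the sharpest 10% of frames, mimicking a stacker's "best frames" cut.
frames.sort(key=sharpness, reverse=True)
best = frames[: max(1, len(frames) // 10)]

# Naive stack: mean of the best frames (real tools align each frame first).
stack = np.mean([f.astype(np.float64) for f in best], axis=0)

# A crude "pop" step: unsharp masking as a stand-in for wavelet sharpening.
blur = cv2.GaussianBlur(stack, (0, 0), sigmaX=3)
sharp = cv2.addWeighted(stack, 1.5, blur, -0.5, 0)
cv2.imwrite("jupiter_stacked.png", np.clip(sharp, 0, 255).astype(np.uint8))
```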
At the eyepiece, the first hurdle is object acquisition. A standard webcam can go after bright targets such as the Moon, the Sun (with the proper filter), planets, and bright double stars. We've even nabbed the International Space Station with our rig using a low-tech but effective tracking method. Your field of view, however, will typically be very narrow; my webcam coupled to an 8-inch Celestron C8 Schmidt-Cassegrain typically yields a field of view about 10' on a side. You'll want to center the object in the eyepiece at the highest power possible, then plop the camera in place.
The next battle is centering and focusing the object on the screen. An out-of-focus planet scatters light: tweaking the focus back and forth sometimes reveals the silvery “doughnut” of the planet lurking just out of view.
From there, you'll want the object in as razor-sharp a focus as possible. K3CCDTools has a great feature for this known as the Fine Focusing Tool (FFT). Some observers also use focusing masks, which can be easily built — remember, we're being cheapskates! — out of cardboard. Be sure those reflector mirrors are properly collimated as well.
Don't be surprised if the planet initially looks over-saturated. You'll want to access the manual controls via the camera software to take the brightness, contrast and color saturation down to acceptable levels. I typically shoot at about 15 frames a second. Fun fact: the "shutter speed" of the dark-adapted "Mark 1 human eyeball" is generally quoted at around 1/20th of a second, slower than you'd think!
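For the scripting-inclined, those same manual tweaks can be made programmatically. A rough OpenCV sketch follows; property support varies wildly from one webcam driver to the next, so treat the values below as placeholders:

```python
import cv2

cam = cv2.VideoCapture(0)               # first attached webcam
cam.set(cv2.CAP_PROP_FPS, 15)           # aim for ~15 frames per second
cam.set(cv2.CAP_PROP_BRIGHTNESS, 0.3)   # valid ranges depend on the driver
cam.set(cv2.CAP_PROP_CONTRAST, 0.5)
cam.set(cv2.CAP_PROP_SATURATION, 0.4)

ok, frame = cam.read()                  # grab a test frame to check levels
if ok:
    cv2.imwrite("test_frame.png", frame)
cam.release()
```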
Note: all those thousands of frames of video go somewhere… be sure to clean them off your hard drive occasionally, as it will fill up swiftly!
When you image makes a big difference as well. The best time to shoot an object is when it transits the local north-south meridian and is at its highest point above the horizon, because you're then looking through the thinnest possible cross-section of the often turbulent atmosphere.
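You can even compute that window ahead of time. Here is a short sketch using the astropy library to find when Jupiter rides highest over a night; the observing site coordinates are placeholders, so substitute your own:

```python
import numpy as np
from astropy import units as u
from astropy.time import Time
from astropy.coordinates import EarthLocation, AltAz, get_body

# Placeholder observing site; substitute your own latitude/longitude.
site = EarthLocation(lat=28.5 * u.deg, lon=-80.6 * u.deg, height=10 * u.m)
times = Time("2014-02-01 00:00") + np.linspace(0, 24, 289) * u.hour

# Jupiter's altitude over 24 hours; the peak marks the meridian transit.
altitudes = get_body("jupiter", times, site).transform_to(
    AltAz(obstime=times, location=site)).alt

best = times[np.argmax(altitudes)]
print(f"Jupiter peaks at {altitudes.max():.1f} around {best.iso} UTC")
```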
Universe Today reader Scott Chapman of Montpelier, Virginia also recently shared with us his exploits in planetary webcam imaging and his technique:
“Recently, while looking for an affordable basic telescope, to see if I really had any interest in astronomy, searches and reviews led me to purchase a 70mm refractor. The last thing on my mind was that I could expect to take any pictures of what I might see.
Previously, I had assumed that the only way to take even basic pictures of sky objects was with equipment that was way out of my price range. Imagine my surprise to learn that I could use a simple webcam that I already had sitting around!”
Like many of us mere mortal budget astrophotographers, Scott's goal was great images at low cost. He also shared with us the programs he uses:
– SharpCap2: For capturing .avi video files from the webcam connected to the telescope.
– AutoStakkert2: Selects and stacks the best frames into a single .tiff file using a simple 3-step process. Scott notes that it's "MUCH easier for a beginner to use than Registax!"
– Registax6: The latest version of the software mentioned above.
– JPEGView: For final cropping and file conversion. (I sometimes also use ye ole Paint for this.)
Even after a decade of planetary imaging, some of these were new to us as well, a testament to just how far the technique has continued to evolve. Astrophotography and astronomy are lifelong pursuits, and we continue to learn new things every day.
The current camera I'm shooting with is a Logitech C270 that I call my "Wal-Mart $20 Blue Light Special." (Yes, I know that's Kmart!) Lots of discussion forums exist out there as well, including the QuickCam and Unconventional Imaging Astronomy Group (QCUIAG) on Yahoo!
Some observers have even taken to gutting and modifying their webcams entirely, adding in cooling fans, more sensitive chips, longer exposure times and more.
All great topics for a future post. Let us know of your trials and triumphs in webcam planetary photography!
-Watch Dave Dickinson pit his $20 webcam against multi-thousand-dollar rigs weekly in the Virtual Star Party.
-Be sure to send those webcam pics in to Universe Today!
Growing up, my sister played video games and I read books. Now that she has a one-year-old daughter we constantly argue over how her little girl should spend her time. Should she read books in order to increase her vocabulary and stretch her imagination? Or should she play video games in order to strengthen her hand-eye coordination and train her mind to find patterns?
I like to believe that I did so well in school because of my initial unadorned love for books. But I might be about to lose that argument as gamers prove their value in science and more specifically astronomy.
Take a quick look through Zooniverse and you’ll be amazed by the number of Citizen Science projects. You can explore the surface of the moon in Moon Zoo, determine how galaxies form in Galaxy Zoo and search for Earth-like planets in Planet Hunters.
In 2011 two citizen scientists made big news when they discovered two exoplanet candidates — demonstrating that human pattern recognition can easily complement the powerful computer algorithms created by the Kepler team.
But now we’re introducing yet another Citizen Science project: Disk Detective.
Planets form and grow within dusty, swirling disks of gas that surround young stars. However, many outstanding questions and details within this process still elude us. The best way to better understand how planets form is to directly image nearby planetary nurseries. But first we have to find them.
“Through Disk Detective, volunteers will help the astronomical community discover new planetary nurseries that will become future targets for NASA’s Hubble Space Telescope and its successor, the James Webb Space Telescope,” said the chief scientist for NASA Goddard’s Sciences and Exploration Directorate, James Garvin, in a press release.
NASA’s Wide-field Infrared Survey Explorer (WISE) scanned the entire sky at infrared wavelengths for a year. It took detailed measurements of more than 745 million objects.
Astronomers have used complex computer algorithms to search this vast amount of data for objects that glow bright in the infrared. But now they’re calling on your help. Not only do planetary nurseries glow in the infrared but so do galaxies, interstellar dust clouds and asteroids.
While there are likely thousands of planetary nurseries glowing bright in the data, we have to separate them from everything else. And the only way to do this is to inspect every single image by eye — a monumental challenge for any astronomer — hence the invention of Disk Detective.
Brief animations allow the user to help classify the object based on relatively simple criteria, such as whether or not the object is round or if there are multiple objects.
“Disk Detective’s simple and engaging interface allows volunteers from all over the world to participate in cutting-edge astronomy research that wouldn’t even be possible without their efforts,” said Laura Whyte, director of Citizen Science at the Adler Planetarium in Chicago, Ill.
The project is hoping to find two types of developing planetary environments, distinguished by their age. The first, known as a young stellar object disk, is, well, young. It's less than 5 million years old and contains large quantities of gas. The second, known as a debris disk, is older than 5 million years. It contains no gas but instead belts of rocky or icy debris similar to our very own asteroid and Kuiper belts.
So what are you waiting for? Head to Disk Detective and help astronomers understand how complex worlds form in dusty disks of gas. The book will be there when you get back.
This dissolve animation compares the LRO image (geometrically corrected) of LADEE captured on Jan. 14, 2014 with a computer-generated and labeled image of LADEE. LRO and LADEE are both NASA science spacecraft currently in orbit around the Moon. Credit: NASA/Goddard/Arizona State University
A pair of NASA spacecraft orbiting Earth’s nearest celestial neighbor just experienced a brief ‘Close Encounter of the Lunar Kind’.
Proof of the rare orbital tryst has now been revealed by NASA in the form of spectacular imagery (see above and below), just released, showing NASA's recently arrived Lunar Atmosphere and Dust Environment Explorer (LADEE) lunar orbiter being photographed by a powerful camera aboard NASA's five-year-old Lunar Reconnaissance Orbiter (LRO) as the two orbiters met for a fleeting moment just two weeks ago.
See above a dissolve animation that compares the LRO image (geometrically corrected) of LADEE captured on Jan. 14, 2014 with a computer-generated and labeled LADEE image.
All this was only made possible by a lot of very precise orbital calculations and a spacecraft ballet of sorts that had to be nearly perfectly choreographed and timed to accomplish.
Both sister orbiters were speeding along at over 3,600 mph (1,600 meters per second) while traveling perpendicular to one another!
So the glimpse was short but sweet.
LADEE flies in an equatorial orbit (east-to-west) while LRO travels in a polar orbit (south-to-north). LADEE achieved lunar orbit on Oct. 6, 2013 amidst the federal government shutdown.
Thus their orbits align only infrequently.
The LRO orbiter did a pirouette to precisely point its high resolution narrow angle camera (NAC) while hurtling along in lunar orbit, barely 5.6 miles (9 km) above LADEE.
And it was all over in less than the wink of an eye!
LADEE entered LRO’s Narrow Angle Camera (NAC) field of view for 1.35 milliseconds and a smeared image of LADEE was snapped. LADEE appears in four lines of the LROC image, and is distorted right-to-left.
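A quick back-of-envelope calculation shows why the window was so short. Two perpendicular orbits at 1,600 meters per second give a relative speed of about 2,300 meters per second, so a two-meter spacecraft sweeps past a single detector line in roughly a millisecond. The sketch below is our own rough arithmetic, not NASA's:

```python
import math

v = 1600.0                  # each orbiter's speed, m/s (from NASA's figures)
v_rel = math.hypot(v, v)    # perpendicular orbits: ~2,263 m/s relative speed
ladee_length = 2.0          # LADEE is barely two meters long

print(f"Relative speed: {v_rel:.0f} m/s")
print(f"Time to cross its own length: {ladee_length / v_rel * 1e3:.2f} ms")
```

The NAC detector line has a finite field of view of its own, which is presumably why the actual figure came out a bit longer, at 1.35 milliseconds.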
Both spacecraft are tiny – barely two meters in length.
“Since LROC is a pushbroom imager, it builds up an image one line at a time, thus catching a target as small and fast as LADEE is tricky!” wrote Mark Robinson, LROC principal investigator of Arizona State University.
So the fabulous picture was only possible as a result of close collaboration and extraordinary teamwork between NASA’s LADEE, LRO and LROC camera mission operations teams.
LADEE passed directly beneath the LRO orbit plane a few seconds before LRO crossed the LADEE orbit plane, meaning a straight down LROC image would have just missed LADEE, said NASA.
Therefore, LRO was rolled 34 degrees to the west so the LROC detector (one line) would be precisely oriented to catch LADEE as it passed beneath.
“Despite the blur it is possible to find details of the spacecraft. You can see the engine nozzle, bright solar panel, and perhaps a star tracker camera (especially if you have a correctly oriented schematic diagram of LADEE for comparison),” wrote Robinson in a description.
See the LADEE schematic in the lead image herein.
LADEE was launched Sept. 6, 2013 from NASA Wallops in Virginia on a science mission to investigate the composition and properties of the Moon’s pristine and extremely tenuous atmosphere, or exosphere, and untangle the mysteries of its lofted lunar dust.
Since LADEE is now more than halfway through its roughly 100-day mission, time was of the essence before the craft takes a death dive into the Moon's surface.
You can see a full scale model of LADEE at the NASA Wallops visitor center, which offers free admission.
LRO launched Sept. 18, 2009 from Cape Canaveral, Florida to conduct comprehensive investigations of the Moon with seven science instruments and search for potential landing sites for a return by human explorers. It has collected astounding views of the lunar surface, including the manned Apollo landing sites as well as a treasure trove of lunar data.
In addition to NASA’s pair of lunar orbiters, China recently soft landed two probes on the Moon.
So be sure to read my new story detailing how LRO took some stupendous Christmastime 2013 images of China's maiden lunar lander and rover, Chang'e-3 and Yutu, from high above – here.
Stay tuned here for Ken’s continuing LADEE, Chang’e-3, Orion, Orbital Sciences, SpaceX, commercial space, Mars rover and more news.
Call it the eclipse nobody saw. NASA's Solar Dynamics Observatory (SDO) got its own private solar eclipse showing from its geosynchronous orbital perch today. Twice a year, during its new phase, the moon glides in front of the sun from the observatory's perspective. Although we can't be there in person to see it, the remote view isn't too shabby. The events are called lunar transits rather than eclipses since they're seen from outer space. Transits typically last about a half hour, but at 2.5 hours, today's was one of the longest ever recorded. The next one occurs on July 26, 2014.
Today’s lunar transit of the sun followed by a strong solar flare
When an eclipse ends, the fun is usually over, but not this time. Just as the moon slid off the sun’s fiery disk, a strong M6.6 solar flare exploded from within a new, very active sunspot group rounding the eastern limb and blasted a CME (coronal mass ejection) into space. What a show!
SDO circles Earth in a geosynchronous orbit about 22,000 miles high and photographs the sun continuously, day and night, from a vantage point high above Mexico and the Pacific Ocean. About 1.5 terabytes of solar data (the equivalent of half a million songs from iTunes) are downloaded to antennas in White Sands, New Mexico every day.
For comparison, the space station, which orbits much closer to Earth, would make a poor solar observatory, since Earth blocks the sun for half of every 90 minute orbit.
When you look at the still pictures and video, notice how distinct the edge of the moon appears. With virtually no atmosphere, the moon takes a “sharp” bite out of the sun.
SDO amazes with its spectacular pictures of the sun taken in 10 different wavelengths of light every 10 seconds; additional instruments study vibrations on the sun’s surface, magnetic fields and how much UV radiation the sun pours into space.
Compared to all the hard science, the twice a year transits are a sweet side benefit much like the cherries topping a sundae.
You can make your own movie of today's partial eclipse by visiting the SDO website and following these easy steps:
* Click on the Data tab and select AIA/HMI Browse Data
* Click on the Enter Start Date window, select a start date and time and click Done
* Click on Enter End Date and click Done
* Under Telescopes, pick the color (wavelength) sun you want
* Select View in the display box
* Click Submit at the bottom and watch a video of your selected pictures
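If you would rather script it than click through the website, the community-developed sunpy package can fetch the same AIA imagery. A sketch, assuming sunpy and astropy are installed (the times, cadence and wavelength here are our own approximate choices):

```python
import astropy.units as u
from sunpy.net import Fido, attrs as a

# Search for AIA 171-angstrom images around the Jan. 30, 2014 transit.
result = Fido.search(
    a.Time("2014-01-30 08:30", "2014-01-30 11:00"),
    a.Instrument("AIA"),
    a.Wavelength(171 * u.angstrom),
    a.Sample(5 * u.minute),   # one frame every five minutes
)
files = Fido.fetch(result)
print(f"Downloaded {len(files)} FITS files")
```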
We at Universe Today have snow on our minds these days with all this Polar Vortex talk. From out the window, the snowflakes all look the same, but peer at flakes under a microscope and you can see all these different designs pop up. Turns out that our asteroid belt between Mars and Jupiter is also much more diverse than previously believed, all because astronomers took the time to do a detailed survey.
Here’s the interesting thing: the diversity, the team says, implies that Earth-like planets would be hard to find, which could be a blow for astronomers seeking an Earth 2.0 somewhere out in the universe if other research agrees.
To jump back a couple of steps, there’s a debate about how water arose on Earth. One theory is that back billions of years ago when the solar system was settling into its current state — a time when planetesimals were crashing into each other constantly and the larger planets possibly migrated between different orbits — comets and asteroids bearing water crashed into a proto-Earth.
“If true, the stirring provided by migrating planets may have been essential to bringing those asteroids,” the astronomers stated in a press release. “This raises the question of whether an Earth-like exoplanet would also require a rain of asteroids to bring water and make it habitable. If so, then Earth-like worlds might be rarer than we thought.”
To take this example further, the researchers found that the asteroid belt comes from a mix of locations around the solar system. Well, a model the astronomers cite shows that Jupiter once migrated much closer to the sun, basically at the same distance as where Mars is now.
When Jupiter migrated, it disturbed everything in its wake, possibly removing as much as 99.9 percent of the original asteroid population. Other planet migrations threw rocks from all over the solar system into the asteroid belt. This means the origin of water in the belt could be more complicated than previously believed.
You can read more details of the survey in the journal Nature. Data was gathered from the Sloan Digital Sky Survey and the research was led by Francesca DeMeo, a Hubble postdoctoral fellow at the Harvard-Smithsonian Center for Astrophysics.
You sure couldn’t hide those grins on television from the Astronaut Candidate Class of 2013 when the call came from the International Space Station.
NASA’s latest recruits were at the Smithsonian National Air and Space Museum in Washington, D.C. at an event today (Thursday) for students. Amid the many youngster questions to Expedition 38 astronauts Mike Hopkins and Rick Mastracchio, astronaut candidate Jessica Meir managed one of her own: was the wait worth it?
Hovering in front of the camera, four-time flyer Mastracchio vigorously shook his hand “no” to laughter from the audience. Hopkins answered her more seriously: “It is definitely worth it. It is the most amazing experience I think you can ever have. Floating is just truly incredible; it just never gets old.”
Minutes later, Hopkins demonstrated a “stupid astronaut trick”: doing Road Runner-style sprinting in place in mid-air. The laughing crew signed off — “So they’re floating off now?” asked event moderator and veteran astronaut Leland Melvin — and the new class had the chance to answer questions of their own.
While the class expressed effusive delight at being astronauts — they were hired last year, so the feeling is quite new to them — Meir said that there was some sadness at leaving the careers they had before. As a recent article in Air & Space Smithsonian pointed out, this class will have several years to wait for a ride into space because there are no longer robust shuttle crews of seven people going up several times a year. The Soyuz only carries three people at a time, and there are fewer missions, each lasting longer.
There also is some ambiguity about where the astronauts will go. The International Space Station has been extended until at least 2024, but astronaut candidate Anne McClain added today that an asteroid or Mars are other things being considered for their class. “This class is such an exciting time to be at NASA,” she said.
Other questions asked of the class at the event included who will go to space first and, from a wee future astronaut, which planet they'd prefer to go to. You can watch the whole broadcast on the link above.
By now, you will probably have heard that astronomers have produced the first global weather map for a brown dwarf. (If you haven't, you can find the story here.) Maybe you've even built the cube model or the origami balloon model of the surface of the brown dwarf Luhman 16B the researchers provided (here).
Since one of my hats is that of public information officer at the Max Planck Institute for Astronomy, where most of the map-making took place, I was involved in writing a press release about the result. But one aspect that I found particularly interesting didn’t get much coverage there. It’s that this particular bit of research is a good example of how fast-paced astronomy can be these days, and, more generally, it shows how astronomical research works. So here’s a behind-the-scenes look – a making-of, if you will – for the first brown dwarf surface map (see image on the right).
As in other sciences, if you want to be a successful astronomer, you need to do something new, and go beyond what’s been done before. That, after all, is what publishable new results are all about. Sometimes, such progress is driven by larger telescopes and more sensitive instruments becoming available. Sometimes, it’s about effort and patience, such as surveying a large number of objects and drawing conclusion from the data you’ve won.
Ingenuity plays a significant role. Think of the telescopes, instruments and analytical methods developed by astronomers as the tools in a constantly growing tool box. One way of obtaining new results is to combine these tools in new ways, or to apply them to new objects.
That’s why our opening scene is nothing special in astronomy: It shows Ian Crossfield, a post-doctoral researcher at the Max Planck Institute for Astronomy, and a number of colleagues (including institute director Thomas Henning) in early March 2013, discussing the possibility of applying one particular method of mapping stellar surfaces to a class of objects that had never been mapped in this way before.
The method is called Doppler imaging. It makes use of the fact that light from a rotating star is slightly shifted in frequency: light from the side rotating toward us is shifted to higher frequencies, light from the receding side to lower ones. As different parts of the stellar surface are whisked around by the star's rotation, the frequency shifts vary slightly depending on where the light-emitting region is located on the star. From these systematic variations, an approximate map of the stellar surface, showing darker and brighter areas, can be reconstructed. Stars are much too distant for even the largest current telescopes to discern surface details, but in this way a surface map can be obtained indirectly.
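To see how a surface feature leaves a fingerprint in a spectral line, consider this toy model in Python. It slices the visible disk into strips of constant line-of-sight velocity and dims one strip to mimic a dark spot; the missing flux shows up at the spot's Doppler velocity and drifts across the line profile as rotation carries the spot along. The numbers are invented for illustration and have nothing to do with the actual CRIRES analysis:

```python
import numpy as np

v_eq = 20.0                         # equatorial rotation speed, km/s (assumed)
strips = np.linspace(-1, 1, 41)     # strip positions across the disk (x/R)
velocities = v_eq * strips          # line-of-sight velocity of each strip
weights = np.sqrt(1 - strips**2)    # crude strip visibility (chord length)

def flux_profile(spot_x, spot_dim=0.5):
    """Flux contribution per velocity strip, with a dark spot at spot_x."""
    w = weights.copy()
    w[np.abs(strips - spot_x) < 0.1] *= spot_dim   # dim the spot's strip
    return w / w.sum()

unspotted = flux_profile(spot_x=2.0)    # spot placed off the disk: no dimming
for phase in (-0.8, 0.0, 0.8):          # rotation carries the spot across
    dip = unspotted - flux_profile(phase)
    print(f"spot at x={phase:+.1f}: signature near "
          f"{velocities[np.argmax(dip)]:+.1f} km/s")
```

Run it and the spot's signature marches from blueshifted to redshifted velocities; inverting many such profiles, taken over a full rotation, is what yields the surface map.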
The method itself isn’t new. The basic concept was invented in the late 1950s, and the 1980s saw several applications to bright, slowly rotating stars, with astronomers using Doppler imaging to map those stars’ spots (dark patches on a stellar surface; the stellar analogue to Sun spots).
Crossfield and his colleagues were wondering: Could this method be applied to a brown dwarf – an intermediary between planet and star, more massive than a planet, but with insufficient mass for nuclear fusion to ignite in the object’s core, turning it into a star? Sadly, some quick calculations, taking into account what current telescopes and instruments can and cannot do as well as the properties of known brown dwarfs, showed that it wouldn’t work.
The available targets were too faint, and Doppler imaging needs lots of light: for one because you need to split the available light into the myriad colors of a spectrum, and also because you need to take many different rather short measurements – after all, you need to monitor how the subtle frequency shifts caused by the Doppler effect change over time.
So far, so ordinary. Most discussions of how to make observations of a completely new type probably come to the conclusion that it cannot be done – or cannot be done yet. But in this case, another driver of astronomical progress made an appearance: The discovery of new objects.
On March 11, Kevin Luhman, an astronomer at Penn State University, announced a momentous discovery: Using data from NASA's Wide-field Infrared Survey Explorer (WISE), he had identified a system of two brown dwarfs orbiting each other. Remarkably, this system was at a distance of a mere 6.5 light-years from Earth. Only the Alpha Centauri star system and Barnard's star are closer to Earth than that. In fact, the last time an object was discovered that close to our Solar system was Barnard's star – and that discovery was made in 1916.
Modern astronomers are not known for coming up with snappy names, and the new object, which was designated WISE J104915.57-531906.1, was no exception. To be fair, this is not meant to be a real name; it’s a combination of the discovery instrument WISE with the system’s coordinates in the sky. Later, the alternative designation “Luhman 16AB” for the system was proposed, as this was the 16th binary system discovered by Kevin Luhman, with A and B denoting the binary system’s two components.
These days, the Internet gives the astronomical community immediate access to new discoveries as soon as they are announced. Many, probably most astronomers begin their working day by browsing recent submissions to astro-ph, the astrophysical section of the arXiv, an international repository of scientific papers. With a few exceptions – some journals insist on exclusive publication rights for at least a while –, this is where, in most cases, astronomers will get their first glimpse of their colleagues’ latest research papers.
Luhman posted his paper “Discovery of a Binary Brown Dwarf at 2 Parsecs from the Sun” on astro-ph on March 11. For Crossfield and his colleagues at MPIA, this was a game-changer. Suddenly, here was a brown dwarf for which Doppler imaging could conceivably work, and yield the first ever surface map of a brown dwarf.
However, it would still take the light-gathering power of one of the largest telescopes in the world to make this happen, and observation time on such telescopes is in high demand. Crossfield and his colleagues decided they needed to run one more test before applying. Any object suitable for Doppler imaging will flicker ever so slightly, growing slightly brighter and darker in turn as brighter or darker surface areas rotate into view. Did Luhman 16A or 16B flicker – in astronomer-speak: did one of them, or perhaps both, show high variability?
Astronomy comes with its own time scales. Communication via the Internet is fast. But if you have a new idea, then ordinarily, you can’t just wait for night to fall and point your telescope accordingly. You need to get an observation proposal accepted, and this process takes time – typically between half a year and a year between your proposal and the actual observations. Also, applying is anything but a formality. Large facilities, like the European Southern Observatory’s Very Large Telescopes, or space telescopes like the Hubble, typically receive applications for more than 5 times the amount of observing time that is actually available.
But there’s a short-cut – a way for particularly promising or time-critical observing projects to be completed much faster. It’s known as “Director’s Discretionary Time”, as the observatory director – or a deputy – are entitled to distribute this chunk of observing time at their discretion.
On April 2, Beth Biller, another MPIA post-doc (she is now at the University of Edinburgh), applied for Director’s Discretionary Time on the MPG/ESO 2.2 m telescope at ESO’s La Silla observatory in Chile. The proposal was approved the same day.
Biller’s proposal was to study Luhman 16A and 16B with an instrument called GROND. The instrument had been developed to study the afterglows of powerful, distant explosions known as gamma ray bursts. With ordinary astronomical objects, astronomers can take their time. These objects will not change much over the few hours an astronomer makes observations, first using one filter to capture one range of wavelengths (think “light of one color”), then another filter for another wavelength range. (Astronomical images usually capture one range of wavelengths – one color – at a time. If you look at a color image, it’s usually the result of a series of observations, one color filter at a time.)
Gamma ray bursts and other transient phenomena are different. Their properties can change on a time scale of minutes, leaving no time for consecutive observations. That is why GROND allows for simultaneous observations of seven different colors.
Biller had proposed to use GROND's unique capability for recording brightness variations for Luhman 16A and 16B in seven different colors simultaneously – a kind of measurement that had never been done before at this scale. The most simultaneous information researchers had gotten from a brown dwarf had been at two different wavelengths (work by Esther Buenzli, then at the University of Arizona's Steward Observatory, and colleagues). Biller was going for seven. As slightly different wavelength regimes carry information about gas at slightly different temperatures, such measurements promised insight into the layer structure of these brown dwarfs – with different temperatures corresponding to different atmospheric layers at different heights.
For Crossfield and his colleagues – Biller among them –, such a measurement of brightness variations should also show whether or not one of the brown dwarfs was a good candidate for Doppler imaging.
As it turned out, they didn't even have to wait that long. A group of astronomers around Michaël Gillon had pointed the small robotic telescope TRAPPIST, designed for detecting exoplanets by the brightness variations they cause when passing between their host star and an observer on Earth, at Luhman 16AB. The same day that Biller applied for observing time and had her application approved, the TRAPPIST group published a paper, "Fast-evolving weather for the coolest of our two new substellar neighbours," charting brightness variations for Luhman 16B.
This news caught Crossfield thousands of miles from home. Some astronomical observations do not require astronomers to leave their cozy offices – the proposal is sent to staff astronomers at one of the large telescopes, who make the observations once the conditions are right and send the data back via Internet. But other types of observations do require astronomers to travel to whatever telescope is being used – to Chile, say, or to Hawaii.
When the brightness variations for Luhman 16B were announced, Crossfield was observing in Hawaii. He and his colleagues realized right away that, given the new results, Luhman 16B had moved from being a possible candidate for the Doppler imaging technique to being a promising one. On the flight from Hawaii back to Frankfurt, Crossfield quickly wrote an urgent observing proposal for Director’s Discretionary Time on CRIRES, a spectrograph installed on one of the 8 meter Very Large Telescopes (VLT) at ESO’s Paranal observatory in Chile, submitting his application on April 5. Five days later, the proposal was accepted.
On May 5, the giant 8 meter mirror of Antu, one of the four Unit Telescopes of the Very Large Telescope, turned towards the Southern constellation Vela (the “Sail of the Ship”). The light it collected was funneled into CRIRES, a high-resolution infrared spectrograph that is cooled down to about -200 degrees Celsius (-330 Fahrenheit) for better sensitivity.
Three and two weeks earlier, respectively, Biller’s observations had yielded rich data about the variability of both the brown dwarfs in the intended seven different wavelength bands.
At this point, no more than two months had passed between the original idea and the observations. But paraphrasing Edison’s famous quip, observational astronomy is 1% observation and 99% evaluation, as the raw data are analyzed, corrected, compared with models and inferences made about the properties of the observed objects.
For Beth Biller's multi-wavelength monitoring of brightness variations, this took about five months. In early September, Biller and 17 coauthors, Crossfield and numerous other MPIA colleagues among them, submitted their article to the Astrophysical Journal Letters (ApJL); after some revisions, it was accepted on October 17. From October 18 onward, the results were accessible online at astro-ph, and a month later they were published on the ApJL website.
In late September, Crossfield and his colleagues had finished their Doppler imaging analysis of the CRIRES data. Results of such an analysis are never 100% certain, but the astronomers had found the most probable structure of the surface of Luhman 16B: a pattern of brighter and darker spots; clouds made of iron and other minerals drifting on hydrogen gas.
As is usual in the field, the text they submitted to the journal Nature was sent out to a referee – a scientist, who remains anonymous, and who gives recommendations to the journal’s editors whether or not a particular article should be published. Most of the time, even for an article the referee thinks should be published, he or she has some recommendations for improvement. After some revisions, Nature accepted the Crossfield et al. article in late December 2013.
With Nature, you are only allowed to publish the final, revised version on astro-ph or similar servers no less than 6 months after publication in the journal. So while a number of colleagues will have heard about the brown dwarf map on January 9 at a session at the 223rd Meeting of the American Astronomical Society in Washington, D.C., for the wider astronomical community the online publication, on January 29, 2014, will have been the first glimpse of this new result. And you can bet that, seeing the brown dwarf map, a number of them will have started thinking about what else one could do. Stay tuned for the next generation of results.
And there you have it: 10 months of astronomical research, from idea to publication, resulting in the first surface map of a brown dwarf (Crossfield et al.) and the first seven-wavelength-band study of brightness variations of two brown dwarfs (Biller et al.). Taken together, the studies provide a fascinating image of complex weather patterns on an object somewhere between a planet and a star, mark the beginning of a new era for brown dwarf studies, and represent an important step towards another goal: detailed surface maps of giant gas planets around other stars.
On a more personal note, this was my first ever press release to be picked up by the Weather Channel.
Think the weather is nasty this winter here on Earth? Try vacationing on the brown dwarf Luhman 16B sometime.
Two studies out this week from the Max Planck Institute for Astronomy, based in Heidelberg, Germany, offer the first look at the atmospheric features of a brown dwarf.
A brown dwarf is a substellar object that bridges the gap between a high-mass planet, at over 13 Jupiter masses, and a low-mass red dwarf star, at above 75 Jupiter masses. To date, few brown dwarfs have been directly imaged. For the study, researchers used the recently discovered brown dwarf pair Luhman 16A & B. At about 45 (A) and 40 (B) Jupiter masses, the pair is 6.5 light-years distant and located in the constellation Vela. Only Alpha Centauri and Barnard's Star are closer to Earth. Luhman 16A is an L-type brown dwarf, while the B component is a T-type substellar object.
“Previous observations have inferred that brown dwarfs have mottled surfaces, but now we can start to directly map them.” Ian Crossfield of the Max Planck Institute for Astronomy said in this week’s press release. “What we see is presumably patchy cloud cover, somewhat like we see on Jupiter.”
To construct these images, astronomers used an indirect technique known as Doppler imaging. This method takes advantage of the minute shifts observed as rotating features on the brown dwarf approach and recede from the observer. The Doppler speeds of features can also hint at the latitudes being observed, as well as the body's inclination, or tilt, to our line of sight.
But you won't need a jacket, as researchers gauge the weather on Luhman 16B to be in the 1,100 degrees Celsius range, with a rain of molten iron in a predominantly hydrogen atmosphere.
The study was carried out using the CRyogenic InfraRed Echelle Spectrograph (CRIRES) mounted on the 8-metre Very Large Telescope based at the European Southern Observatory’s (ESO) Paranal observatory complex in Chile. CRIRES obtained the spectra necessary to re-construct the brown dwarf map, while backup brightness measurements were accomplished using the GROND (Gamma-Ray Burst Optical/Near-Infrared Detector) astronomical camera affixed to the 2.2 metre telescope at the ESO La Silla Observatory.
The next phase of observations will involve imaging brown dwarfs using the Spectro-Polarimetric High-contrast Exoplanet Research (SPHERE) instrument, set to go online at the Very Large Telescope facility later this year.
And that may just usher in a new era of directly imaging features on objects beyond our solar system, including exoplanets.
“The exciting bit is that this is just the start. With the next generations of telescopes, and in particular the 39-metre European Large Telescope, we will likely see surface maps of more distant brown dwarfs — and eventually, a surface map for a young giant planet,” said Beth Biller, a researcher previously based at the Max Planck Institute and now based at the University of Edinburgh. Biller’s study of the pair went even more in-depth, analyzing changes in brightness at different wavelengths to peer into the atmospheric structure of the brown dwarfs at varying depths.
“We’ve learned that the weather pattern on these brown dwarfs are quite complex,” Biller said. “The cloud structure of the brown dwarf varies quite strongly as a function of atmospheric depth and cannot be explained with single layer clouds.”
The paper on the brown dwarf weather map comes out today in the January 30th, 2014 edition of Nature under the title Mapping Patchy Clouds on a Nearby Brown Dwarf.
The brown dwarf pair targeted in the study was designated Luhman 16A & B after Pennsylvania State University researcher Kevin Luhman, who discovered the pair in mid-March 2013. Luhman has discovered 16 binary systems to date. The WISE catalog designation for the system is the much more cumbersome and phone-number-esque WISE J104915.57-531906.1.
We caught up with the researchers to ask them some specifics on the orientation and rotation of the pair.
“The rotation period of Luhman 16B was previously measured watching the brown dwarf’s globally-averaged brightness changes over many days. Luhman 16A seems to have a uniformly thick layer of clouds, so it exhibits no such variation and we don’t yet know its period,” Crossfield told Universe Today. “We can estimate the inclination of the rotation axis because we know the rotation period, we know how big brown dwarfs are, and in our study, we measured the “projected” rotational velocity. From this, we know we must be seeing the brown dwarf near equator-on.”
The maps constructed correspond with an amazingly fast rotation period of just under 6 hours for Luhman 16B. For context, the planet Jupiter – one of the fastest rotators in our solar system – spins once every 9.9 hours.
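That period implies a dizzying spin. Assuming a roughly Jupiter-sized radius (brown dwarfs, despite their greater mass, are all about the diameter of Jupiter), the equatorial speed works out to about 20 kilometers per second:

```python
import math

r_eq = 7.0e7            # assumed radius, m: roughly Jupiter-sized
period = 5.75 * 3600    # Luhman 16B's spin of just under six hours, in s

v_eq = 2 * math.pi * r_eq / period
print(f"Equatorial speed: {v_eq / 1000:.0f} km/s")   # about 21 km/s
```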
“The rotational period of Luhman 16B is known from 12 nights of variability monitoring,” Biller told Universe Today. “The variability in the B component is consistent with the results from 2013, but the A component has a lower amplitude of variability and a somewhat different rotational period of maybe 3-4 hours, but that is still a very tentative result.”
This first mapping of the cloud patterns on a brown dwarf is a landmark, and promises to provide a much better understanding of this transitional class of objects.
Couple this announcement with the recent direct image captured of a nearby brown dwarf, and it's apparent that a new era of exoplanet science is upon us, one where we'll not only be able to confirm the existence of distant worlds and substellar objects, but characterize what they're actually like.
A bunch of people really, really want to go to the Red Planet on the proposed one-way Mars One trip; more than 1,000 applicants are being considered in Round 2 selections. They will, however, face more radiation during their journey, which could put them at higher risk of cancers down the road. While the solution could be to add more shielding to a spacecraft, that's both heavy and expensive.
Enter the alternative: a magnetic field. A group calling itself the EU Project Space Radiation Superconductive Shield says their technology will “solve the issue of radiation protection in three years” and is seeking academic collaborations to make that happen. Here’s how it will work:
“The SR2S superconducting shield will provide an intense magnetic field, 3,000 times stronger than the Earth’s magnetic field and will be confined around the space craft,” a press release states.
“The magnetic fields will extend to about 10 metres in diameter and ionizing particles will be deflected away. Only the most energetic particles will penetrate the superconducting shield, but these will contribute the least to the absorbed radiation dose as their flux is negligible. This will address the issue of suitability of people for space travel as it will open up eligibility for space travel regardless of gender.”
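A rough gyroradius estimate shows why a field of that strength (about 0.15 tesla, taking Earth's surface field as roughly 50 microtesla) turns away most particles but not the most energetic ones. The benchmark proton energies below are our own assumptions:

```python
import math

e = 1.602e-19        # elementary charge, C
c = 2.998e8          # speed of light, m/s
mc2 = 0.938          # proton rest energy, GeV
B = 3000 * 5e-5      # ~0.15 T: 3,000 times Earth's surface field

for T in (0.1, 1.0):                       # proton kinetic energies, GeV
    pc = math.sqrt(T**2 + 2 * T * mc2)     # relativistic momentum, GeV
    p_si = pc * 1e9 * e / c                # convert GeV/c to kg*m/s
    r = p_si / (e * B)                     # gyroradius r = p / (qB)
    print(f"{T:.1f} GeV proton: gyroradius ~ {r:.0f} m")
```

A 100 MeV proton curls up within about 10 meters, the quoted extent of the field, while a 1 GeV proton's gyroradius is several times larger, consistent with the claim that only the most energetic (and rarest) particles get through.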
That last bit refers to some radiation guidelines highlighted a few months ago. Peggy Whitson, a veteran NASA astronaut, said publicly that women fly far fewer hours in space than men. That’s because space authorities apply lower “lifetime” radiation limits to females (for biological reasons, which you can read more about here).
The project team includes participation from the Italian National Institute of Nuclear Physics, General Company For Space (CGS SpA), Columbus Superconductor SpA, Thales Alenia Space – Italia S.p.A., the French Commission of Atomic Energy and Alternative Energies, and the European Organization for Nuclear Research (CERN).
“We have already made significant progress since the beginning of the project and believe we will succeed in this goal of solving the radiation protection issue,” stated Roberto Battiston, who leads the project and is also a professor of experimental physics at the University of Trento in Italy. The project started a year ago.
“In the last few months, the international teams working at CERN have solved two major technical issues relevant to the superconducting magnets in space: (i) how to join shorter segments into very long high-temperature superconducting cables without losing the superconducting properties, and (ii) how to ensure protection of long high-temperature cables from a quench.”
More information on the project is available at its website. What do you think of their idea? Leave your thoughts in the comments.
When we think of gravity, we typically think of it as a force between masses. When you step on a scale, for example, the number on the scale represents the pull of the Earth’s gravity on your mass, giving you weight. It is easy to imagine the gravitational force of the Sun holding the planets in their orbits, or the gravitational pull of a black hole. Forces are easy to understand as pushes and pulls.
But we now understand that gravity as a force is only part of a more complex phenomenon described by the theory of general relativity. While general relativity is an elegant theory, it's a radical departure from the idea of gravity as a force. As Carl Sagan once said, “Extraordinary claims require extraordinary evidence,” and Einstein's theory is a very extraordinary claim. But it turns out there are several extraordinary experiments that confirm the curvature of space and time.
The key to general relativity lies in the fact that everything in a gravitational field falls at the same rate. Stand on the Moon and drop a hammer and a feather, and they will hit the surface at the same time. The same is true for any object regardless of its mass or physical makeup, and this is known as the equivalence principle.
Since everything falls in the same way regardless of its mass, it means that without some external point of reference, a free-floating observer far from gravitational sources and a free-falling observer in the gravitational field of a massive body each have the same experience. For example, astronauts in the space station look as if they are floating without gravity. Actually, the gravitational pull of the Earth on the space station is nearly as strong as it is at the surface. The difference is that the space station (and everything in it) is falling. The space station is in orbit, which means it is literally falling around the Earth.
This equivalence between floating and falling is what Einstein used to develop his theory. In general relativity, gravity is not a force between masses. Instead gravity is an effect of the warping of space and time in the presence of mass. Without a force acting upon it, an object will move in a straight line. If you draw a line on a sheet of paper, and then twist or bend the paper, the line will no longer appear straight. In the same way, the straight path of an object is bent when space and time are bent. This explains why all objects fall at the same rate: the Earth's mass warps spacetime in a particular way, so the straight paths of all objects are bent in the same way near the Earth.
So what kind of experiment could possibly prove that gravity is warped spacetime? One stems from the fact that light can be deflected by a nearby mass. It is often argued that since light has no mass, it shouldn’t be deflected by the gravitational force of a body. This isn’t quite correct. Since light has energy, and by special relativity mass and energy are equivalent, Newton’s gravitational theory predicts that light would be deflected slightly by a nearby mass. The difference is that general relativity predicts it will be deflected twice as much.
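The predicted numbers are easy to check for yourself. For light grazing the Sun's limb, Newton's theory gives a deflection of 2GM/(c²R), while general relativity gives 4GM/(c²R), about 1.75 arcseconds:

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30     # mass of the Sun, kg
c = 2.998e8      # speed of light, m/s
R = 6.963e8      # radius of the Sun, m (light grazing the limb)

newton = 2 * G * M / (c**2 * R)       # Newtonian prediction, radians
einstein = 4 * G * M / (c**2 * R)     # general relativity: twice as much

to_arcsec = 206265                    # radians to arcseconds
print(f"Newton:   {newton * to_arcsec:.2f} arcsec")    # ~0.87 arcsec
print(f"Einstein: {einstein * to_arcsec:.2f} arcsec")  # ~1.75 arcsec
```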
The effect was first observed by Arthur Eddington in 1919. Eddington traveled to the island of Principe off the coast of West Africa to photograph a total eclipse. He had taken photos of the same region of the sky sometime earlier. By comparing the eclipse photos and the earlier photos of the same sky, Eddington was able to show that the apparent positions of stars shifted when the Sun was near. The amount of deflection agreed with Einstein, not Newton. Since then we've seen a similar effect where the light of distant quasars and galaxies is deflected by closer masses. It is often referred to as gravitational lensing, and it has been used to measure the masses of galaxies, and even to see the effects of dark matter.
Another piece of evidence is known as the time-delay experiment. The mass of the Sun warps space near it, so light passing near the Sun doesn't travel in a perfectly straight line. Instead it travels along a slightly curved path that is a bit longer. This means light from a planet on the other side of the solar system from Earth reaches us a tiny bit later than we would otherwise expect. The first measurement of this time delay was made in the late 1960s by Irwin Shapiro. Radio signals were bounced off Venus from Earth when the two planets were almost on opposite sides of the Sun. The measured delay of the signals' round trip was about 200 microseconds, just as predicted by general relativity. This effect is now known as the Shapiro time delay, and it means the average speed of light (as determined by the travel time) is slightly slower than the (always constant) instantaneous speed of light.
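Again, the prediction takes only a few lines of arithmetic to verify. Using the standard Shapiro delay formula for a radar signal grazing the Sun on its way to Venus and back (a rough sketch that ignores the details of the orbits), we land close to the measured figure:

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30          # mass of the Sun, kg
c = 2.998e8           # speed of light, m/s
r_earth = 1.496e11    # Earth's orbital radius, m
r_venus = 1.082e11    # Venus's orbital radius, m
b = 6.963e8           # closest approach: grazing the solar limb

# One-way Shapiro delay, doubled for the radar round trip:
one_way = (2 * G * M / c**3) * math.log(4 * r_earth * r_venus / b**2)
print(f"Round-trip delay: {2 * one_way * 1e6:.0f} microseconds")  # ~230 us
```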
A third effect is gravitational waves. If stars warp space around them, then the motion of stars in a binary system should create ripples in spacetime, similar to the way swirling your finger in water can create ripples on the water’s surface. As the gravity waves radiate away from the stars, they take away some of the energy from the binary system. This means that the two stars gradually move closer together, an effect known as inspiralling. As the two stars inspiral, their orbital period gets shorter because their orbits are getting smaller.
For regular binary stars this effect is so small that we can't observe it. However, in 1974 two astronomers (Hulse and Taylor) discovered an interesting pulsar. Pulsars are rapidly rotating neutron stars that happen to radiate radio pulses in our direction. The pulse rate of a pulsar is typically very, very regular. Hulse and Taylor noticed that this particular pulsar's rate would speed up slightly and then slow down slightly at a regular interval. They showed that this variation was due to the motion of the pulsar as it orbited a companion star. They were able to determine the orbital motion of the pulsar very precisely, calculating its orbital period to within a fraction of a second. As they observed their pulsar over the years, they noticed its orbital period was gradually getting shorter. The pulsar is inspiralling due to the radiation of gravity waves, just as predicted.
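You can reproduce the predicted decay with the Peters-Mathews quadrupole formula and the published parameters of the Hulse-Taylor system (the values below are rounded from the literature):

```python
import math

G = 6.674e-11                 # gravitational constant
c = 2.998e8                   # speed of light, m/s
m_sun = 1.989e30              # solar mass, kg
m1, m2 = 1.44 * m_sun, 1.39 * m_sun   # pulsar and companion masses
P = 27907.0                   # orbital period, s (about 7.75 hours)
e = 0.617                     # orbital eccentricity

# Peters-Mathews formula for the decay of the orbital period:
f_e = (1 + 73/24 * e**2 + 37/96 * e**4) / (1 - e**2)**3.5
dP_dt = (-192 * math.pi / 5) * (2 * math.pi * G / P)**(5/3) \
        * m1 * m2 / (m1 + m2)**(1/3) / c**5 * f_e

print(f"dP/dt = {dP_dt:.2e}")   # about -2.4e-12 s/s, matching observation
```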
Finally there is an effect known as frame dragging. We have seen this effect near Earth itself. Because the Earth is rotating, it not only curves spacetime by its mass, it twists spacetime around it due to its rotation. This twisting of spacetime is known as frame dragging. The effect is not very big near the Earth, but it can be measured through the Lense-Thirring effect. Basically you put a spherical gyroscope in orbit, and see if its axis of rotation changes. If there is no frame dragging, then the orientation of the gyroscope shouldn’t change. If there is frame dragging, then the spiral twist of space and time will cause the gyroscope to precess, and its orientation will slowly change over time.
We’ve actually done this experiment with a satellite known as Gravity Probe B, and you can see the results in the figure here. As you can see, they agree very well.
Each of these experiments shows that gravity is not simply a force between masses. Gravity is instead an effect of space and time. Gravity is built into the very shape of the universe.
Think on that the next time you step onto a scale.