One night 400 years ago, Galileo pointed his 2-inch telescope at Jupiter and spotted three of its moons. On subsequent nights he spotted a fourth, and watched one of the moons disappear behind Jupiter. With those simple observations, he propelled human understanding onto a path it still travels.
Galileo’s observations set off a revolution in astronomy. Prior to his observations of Jupiter’s moons, the prevailing belief was that the entire Universe rotated around the Earth, which lay at the center of everything. That’s a delightfully childish viewpoint, in retrospect, but it was dogma at the time.
Until Galileo’s telescope, this Earth-centric viewpoint, called Aristotelian cosmology, made sense. To all appearances, we were at the center of the action. Which just goes to show you how wrong we can be.
But once it became clear that Jupiter had other bodies orbiting it, our cherished position at the center of the Universe was doomed.
Galileo’s observations were an enormous challenge to our understanding of ourselves, and to the authorities of the day. He was forced to recant what he had seen, and he was put under house arrest. But he never really backed down from the observations he made with his 2-inch telescope. How could he?
Now, of course, there isn’t so much hostility towards people with telescopes. As time went on, larger and more powerful telescopes were built, and we’ve gotten used to our understanding going through tumultuous changes. We expect it, even anticipate it.
In our current times, Super Telescopes rule the day, and their sizes are measured in meters, not inches. And when new observations challenge our understanding of things, we cluster around out of curiosity, and try to work our way through it. We don’t condemn the results and order scientists to keep quiet.
The first of the Super Telescopes, as far as most of us are concerned, is the Hubble Space Telescope. From its perch in Low Earth Orbit (LEO), the Hubble has changed our understanding of the Universe on numerous fronts. With its cameras, and the steady stream of mesmerizing images those cameras deliver, a whole generation of people have been exposed to the beauty and mystery of the cosmos.
Hubble has gazed at everything, from our close companion the Moon, all the way to galaxies billions of light years away. It’s spotted a comet breaking apart and crashing into Jupiter, dust storms on Mars, and regions of energetic star-birth in other galaxies. But Hubble’s time may be coming to an end soon, and other Super Telescopes are on the way.
Nowadays, Super Telescopes are expensive megaprojects, often involving several nations. They’re built to pursue specific lines of inquiry, such as:
What is the nature of Dark Matter and Dark Energy? How are they distributed in the Universe and what role do they play?
Are there other planets like Earth, and solar systems like ours? Are there other habitable worlds?
Are we alone or is there other life somewhere?
How do planets, solar systems, and galaxies form and evolve?
Some of the Super Telescopes will be on Earth, some will be in space. Some have enormous mirrors made up of individual, computer-controlled segments. The Thirty Meter Telescope has almost 500 of these segments, while the European Extremely Large Telescope has almost 800 of them. Following a different design, the Giant Magellan Telescope has only seven segments, but each one is over 8 meters in diameter and weighs in at a whopping 20 tons of glass.
Some of the Super Telescopes see in ultraviolet or infrared light, while others see in visible light. Some see across several of these bands. The most futuristic of them all, the Large Ultra-Violet, Optical, and Infrared Surveyor (LUVOIR), will be a massive space telescope situated a million-and-a-half kilometers away, with a roughly 15 meter segmented mirror that dwarfs that of the Hubble, at a mere 2.4 meters.
Some of the Super Telescopes will discern the finest distant details, while another, the Large Synoptic Survey Telescope, will complete a ten-year survey of the entire available sky, imaging the same areas over and over. The result will be a living, dynamic map of the sky showing change over time. That living map will be available to anyone with a computer and an internet connection.
We’re in for exciting times when it comes to our understanding of the cosmos. We’ll be able to watch planets forming around young stars, glimpse the earliest ages of the Universe, and peer into the atmospheres of distant exoplanets looking for signs of life. We may even finally crack the code of Dark Matter and Dark Energy, and understand their role in the Universe.
Along the way there will be surprises, of course. There always are, and it’s the unanticipated discoveries and observations that fuel our sense of intellectual adventure.
The Super Telescopes are technological masterpieces. They couldn’t be built without the level of technology we have now, and in fact, the development of Super Telescopes helps drive our technology forward.
But they all have their roots in Galileo and his simple act of observing with a 2-inch telescope. That, and the curiosity about nature that inspired him.
We humans have an insatiable hunger to understand the Universe. As Carl Sagan said, “Understanding is Ecstasy.” But to understand the Universe, we need better and better ways to observe it. And that means one thing: big, huge, enormous telescopes.
In this series we’ll look at the world’s upcoming Super Telescopes:
The Large UV Optical Infrared Surveyor Telescope (LUVOIR)
There’s a whole generation of people who grew up with images from the Hubble Space Telescope. Not just in magazines, but on the internet, and on YouTube. But within another generation or two, the Hubble itself will seem quaint, and watershed events of our times, like the Moon Landing, will be just black and white relics of an impossibly distant time. The next generations will be fed a steady diet of images and discoveries stemming from the Super Telescopes. And the LUVOIR will be front and centre among those ‘scopes.
If you haven’t yet heard of LUVOIR, it’s understandable; LUVOIR is in the early stages of being defined and designed. But LUVOIR represents the next generation of space telescopes, and its power will dwarf that of its predecessor, the Hubble.
LUVOIR (its temporary name) will be a space telescope, and it will do its work at the Lagrange 2 point (L2), the same place that JWST will be. L2 is a natural location for space telescopes. At the heart of LUVOIR will be a 15m segmented primary mirror, much larger than the Hubble’s mirror, which is a mere 2.4m in diameter. In fact, LUVOIR will be so large that the Hubble could drive right through the hole in the center of it.
While the James Webb Space Telescope will be in operation much sooner than LUVOIR, and will also do amazing work, it will observe primarily in the infrared. LUVOIR, as its name makes clear, will have a wider range of observation more like Hubble’s. It will see in the Ultra-Violet spectrum, the Optical spectrum, and the Infrared spectrum.
Recently, Brad Peterson spoke with Fraser Cain on a weekly Space Hangout, where he outlined the plans for the LUVOIR. Brad is a recently retired Professor of Astronomy at the Ohio State University, where he served as chair of the Astronomy Department for 9 years. He is currently the chair of the Science Committee at NASA’s Advisory Council. Peterson is also a Distinguished Visiting Astronomer at the Space Telescope Science Institute, and the chair of the astronomy section of the American Association for the Advancement of Science.
Different designs for LUVOIR have been discussed, but as Peterson points out in the interview above, the plan seems to have settled on a 15m segmented mirror. A 15m mirror is larger than any optical light telescope we have on Earth, though the Thirty Meter Telescope and others will soon be larger.
“Segmented telescopes are the technology of today when it comes to ground-based telescopes. The JWST has taken that technology into space, and the LUVOIR will take segmented design one step further,” Peterson said. But the segmented design of LUVOIR differs from the JWST in several ways.
“…the LUVOIR will take segmented design one step further.” – Brad Peterson
JWST’s mirrors are made of beryllium and coated with gold. LUVOIR doesn’t require the same exotic design. But it has other requirements that will push the envelope of segmented telescope design. LUVOIR will have a huge array of CCD sensors that will require an enormous amount of electrical power to operate.
LUVOIR will not be cryogenically cooled like the JWST is, because it’s not primarily an infrared observatory. LUVOIR will also be designed to be serviceable. In fact, the US Congress now requires all future large space telescopes to be serviceable if practicable.
“Congress has mandated that all future large space telescopes must be serviceable if practicable.” – Brad Peterson
LUVOIR is designed to have a long life. Its multiple instruments will be replaceable, and the hope is that it will last in space for 50 years. Whether it will be serviced by robots or by astronauts has not been determined. It may even be designed so that it could be brought back from L2 for servicing.
LUVOIR will contribute to the search for life on other worlds. A key requirement for LUVOIR is that it be able to do spectroscopy on the atmospheres of distant planets. If you can do spectroscopy, you can determine habitability and, potentially, even whether a planet is inhabited. This drives LUVOIR’s first main technological challenge: the spectroscopy requires a powerful coronagraph to suppress the light of the stars that the exoplanets orbit. LUVOIR’s coronagraph will excel at this, with a starlight suppression ratio of 10 billion to 1. With that capability, LUVOIR should be able to do spectroscopy on the atmospheres of small, terrestrial exoplanets, rather than just larger gas giants.
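To get a feel for what a 10-billion-to-one suppression ratio means, here is a minimal back-of-the-envelope sketch in Python. The idea that an Earth-like planet is roughly ten billion times fainter than its star in visible light is a commonly quoted approximation, not a figure from the interview:

```python
import math

# An Earth-like planet reflects roughly one ten-billionth of its star's
# visible light, so a coronagraph needs a contrast of about 1e10 to see it.
contrast = 1e10

# Express the same ratio as a difference in astronomical magnitudes.
delta_mag = 2.5 * math.log10(contrast)

print(f"Required starlight suppression: {contrast:.0e} to 1")
print(f"Equivalent magnitude difference: {delta_mag:.1f} mag")  # ~25 magnitudes
```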
“This telescope is going to be remarkable. The key science that it’s going to be able to do is spectroscopy of planets in the habitable zone around nearby stars.” – Brad Peterson
This video from NASA’s Goddard Space Flight Center talks about the search for life, and how telescopes like LUVOIR will contribute to the search. At the 15:00 mark, Dr. Aki Roberge talks about how spectroscopy is key to finding signs of life on exoplanets, and how LUVOIR will take that search one step further.
Using spectroscopy to search for signs of life on exoplanets is just one of LUVOIR’s science goals.
LUVOIR is tasked with other challenges as well, including:
Mapping the distribution of dark matter in the Universe.
Isolating the source of gravitational waves.
Imaging circumstellar disks to see how planets form.
Identifying the first starlight in the Universe, studying early galaxies and finding the first black holes.
Studying surface features of worlds in our Solar System.
To tackle all these challenges, LUVOIR will have to clear other technological hurdles. One of them is the requirement for long exposure times. This puts enormous constraints on the stability of the ‘scope, since its mirror is so large. A system of active supports for the mirror segments will help with stability. This is a trait it shares with terrestrial Super Telescopes like the Thirty Meter Telescope and the European Extremely Large Telescope, each of which has hundreds of segments that have to be controlled precisely with computers.
LUVOIR’s construction, and how it will be placed in orbit are also significant considerations.
According to Peterson, LUVOIR could be launched on either of the heavy lift rockets being developed. The Falcon Heavy is being considered, as is the Space Launch System. The SLS Block 1B could do it, depending on the final size of LUVOIR.
“It’s going to require a heavy lift vehicle.” – Brad Peterson
Or LUVOIR may never be launched in one piece at all. It could be assembled in space from pre-built components that are launched one at a time, just like the International Space Station. There are several advantages to that.
With assembly in space, the telescope doesn’t have to be built to withstand the tremendous forces of launch. It also allows for testing once assembly is complete, before the telescope is sent to L2. Once the ‘scope is assembled and tested, a small ion propulsion engine could be used to move it to L2.
It’s possible that the infrastructure to construct LUVOIR in space will exist in a decade or two. NASA’s Deep Space Gateway in cis-lunar space is planned for the mid-2020s. It would act as a staging point for deep-space missions, and for missions to the lunar surface.
LUVOIR is still in the early stages. The people behind it are designing it to meet as many of the science goals as they can, all within the technological constraints of our time. Planning has to start somewhere, and the plans presented by Brad Peterson represent the current thinking behind LUVOIR. But there’s still a lot of work to do.
“Typical time scale from selection to launch of a flagship mission is something like 20 years.” – Brad Peterson
As Peterson explains, LUVOIR will have to be chosen as NASA’s highest priority during the 2020 Decadal Survey. Once that occurs, then a couple more years are required to really flesh out the design of the mission. According to Peterson, “Typical time scale from selection to launch of a flagship mission is something like 20 years.” That gets us to a potential launch in the mid-2030s.
Along the way, LUVOIR will be given a more suitable name. James Webb, Hubble, Kepler and others have all had important missions named after them. Perhaps it’s Carl Sagan’s turn.
“The Carl Sagan Space Telescope” has a nice ring to it, doesn’t it?
The Small Magellanic Cloud (SMC) is one of the Milky Way’s nearest companions (along with the Large Magellanic Cloud.) It’s visible with the naked eye in the southern hemisphere. A new image from the European Southern Observatory’s (ESO) Visible and Infrared Survey Telescope for Astronomy (VISTA) has peered through the clouds that obscure it and given us our biggest image ever of the dwarf galaxy.
The SMC contains several hundred million stars, is about 7,000 light years in diameter, and is about 200,000 light years away. It’s one of the most distant objects that we can see with the naked eye, and can only be seen from the southern hemisphere (and the lowest latitudes of the northern hemisphere.)
The SMC is a great target for studying how stars form because it’s so close to Earth, relatively speaking. But the problem is, its detail is obscured by clouds of interstellar gas and dust. So an optical survey of the Cloud is difficult.
But the ESO’s VISTA instrument is ideal for the task. VISTA is a near-infrared telescope, and infrared light is not blocked by the dust. VISTA was built at the ESO’s Paranal Observatory, in the Atacama Desert in Chile where it enjoys fantastic observing conditions. VISTA was designed to perform several surveys, including the Vista Magellanic Survey.
Explore the Zoomable image of the Small Magellanic Cloud. (You won’t be disappointed.)
The VISTA Magellanic Survey is focused on 3 main objectives:
The study of stellar populations in the Magellanic Clouds
The history of star formation in the Magellanic Clouds
The three-dimensional structure of the Magellanic Clouds
An international team led by Stefano Rubele of the University of Padova has studied this image, and their work has produced some surprising results. VISTA has shown us that most of the stars in this image are much younger than stars in other neighbouring galaxies. It’s also shown us that the SMC’s morphology is that of a warped disc. These are only early results, and there’s much more work to be done analyzing the VISTA image.
As the authors say in their paper, the SMC is a great target for study because of its “rich population of star clusters, associations, stellar pulsators, primary distance indicators, and stars in short-lived evolutionary stages.” In a way, we’re fortunate to have the SMC so close. But studying the SMC was difficult until VISTA came online with its infrared capabilities.
VISTA saw first light on December 11th, 2009. Its time is devoted to systematic surveys of the sky. In its first five years, it undertook large surveys of the entire southern sky, and also studied small patches of the sky to discern extremely faint objects. The leading image in this article is from the VISTA Magellanic Survey, a survey covering 184 square degrees of the sky, taking in both the Small Magellanic Cloud and the Large Magellanic Cloud and their environment.
In order to make sense of our Universe, astronomers have to work hard, and they have to push observing technology to the limit. Some of that hard work revolves around what are called sub-millimeter galaxies (SMGs). SMGs are galaxies that can only be observed in the sub-millimeter range of the electromagnetic spectrum.
The sub-millimeter range is the waveband between the far-infrared and microwave wavebands. (It’s also called terahertz radiation.) We’ve only had the capability to observe in the sub-millimeter range for a couple of decades. We’ve also increased the angular resolution of telescopes, which helps us discern separate objects.
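To see why this waveband is also called terahertz radiation, here is a quick sketch converting a few representative sub-millimeter wavelengths to frequencies (the specific wavelengths are arbitrary examples, not values from the study):

```python
# Convert sub-millimeter wavelengths to frequency: nu = c / lambda.
C = 299_792_458  # speed of light, m/s

for wavelength_mm in (1.0, 0.85, 0.45, 0.3):
    nu_thz = C / (wavelength_mm * 1e-3) / 1e12  # frequency in terahertz
    print(f"{wavelength_mm:.2f} mm  ->  {nu_thz:.2f} THz")
# Wavelengths of roughly 0.3 to 1 mm correspond to roughly 0.3 to 1 THz.
```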
SMGs themselves are dim in other wavelengths, because they’re obscured by dust. The optical light is blocked by the dust, and absorbed and re-emitted in the sub-millimeter range. In the sub-millimeter, SMGs are highly luminous; trillions of times more luminous than the Sun, in fact.
This is because they are extremely active star-forming regions. SMGs are forming stars at a rate hundreds of times greater than the Milky Way. They are also generally older, more distant galaxies, so they’re red-shifted. Studying them helps us understand galaxy and star formation in the early universe.
A new study, led by James Simpson of the University of Edinburgh and Durham University, has examined 52 of these galaxies. In the past, it was difficult to know the exact location of SMGs. In this study, the team relied on the power of the Atacama Large Millimeter/submillimeter array (ALMA) to get a much more precise measurement of their location. These 52 galaxies were first identified by the Submillimeter Common-User Bolometer Array (SCUBA-2) in the UKIDSS Ultra Deep Survey.
There are four major results of the study:
48 of the SMGs are non-lensed, meaning that there is no object of sufficient mass between us and them to distort their light. Of these, the team was able to constrain the redshift (z) for 35 of them, finding a median of z ≈ 2.65. When it comes to extra-galactic observations like this, the higher the redshift, the further away the object is. (For comparison, the highest-redshift object we know of is a galaxy called GN-z11, at z = 11.1, which corresponds to about 400 million years after the Big Bang. A short sketch after these results shows how redshift maps onto cosmic age.)
Another type of galaxy, the Ultra-Luminous Infrared Galaxy (ULIRG), was thought to be an evolved version of the SMGs. But this study showed that SMGs are larger and cooler than ULIRGs, which means that any evolutionary link between the two is unlikely.
The team calculated estimates of dust mass in these galaxies. Their estimates suggest that effectively all of the optical-to-near-infrared light from co-located stars is obscured by dust. They conclude that a common method in astronomy used to characterize astronomical light sources, called Spectral Energy Distribution (SED), may not be reliable when it comes to SMGs.
The fourth result is related to the evolution of galaxies. According to their analysis, it seems unlikely that SMGs can evolve into spiral or lenticular galaxies (a lenticular galaxy is midway between a spiral and an elliptical galaxy.) Rather, it appears that SMGs are the progenitors of elliptical galaxies.
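For readers who want to turn redshifts like z ≈ 2.65 or z = 11.1 into cosmic ages, here is a minimal sketch using the astropy cosmology package. It assumes the Planck 2015 parameters; the exact ages depend on the cosmology you pick:

```python
from astropy.cosmology import Planck15

# Age of the Universe at a given redshift, under the Planck 2015 cosmology.
for z in (2.65, 11.1):
    age = Planck15.age(z)  # returns an astropy Quantity in gigayears
    print(f"z = {z:>5}: the Universe was about {age:.2f} old")
# For z = 11.1 this works out to roughly 0.4 Gyr, i.e. about 400 million
# years after the Big Bang, consistent with the GN-z11 figure above.
```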
This study was a pilot study that the team hopes to extend to many other SMGs in the future.
On March 30, 2017, SpaceX performed a pretty routine rocket launch. The payload was a communications satellite called SES-10, owned by a company in Luxembourg. And if all goes well, the satellite will eventually make its way to a high orbit of 35,000 km (22,000 miles) and deliver broadcasting and television services to Latin America.
For all intents and purposes, this is an absolutely normal, routine, and maybe even boring event in the space industry. Another chemical rocket blasted off another communications satellite to join the thousands of satellites that have come before.
Of course, as you probably know, this wasn’t a routine launch. It was the first step in one of the most important achievements in space flight: launch reusability. This was the second time this particular 14-story Falcon 9 booster had lifted off and pushed a payload into orbit. Not Falcon 9s in general; this specific rocket was reused.
In a previous life, this booster blasted off on April 8, 2016 carrying CRS-8, SpaceX’s 8th resupply mission to the International Space Station. The rocket launched from Florida’s Cape Canaveral, released its payload, re-entered the atmosphere and returned to a floating robotic barge in the Atlantic Ocean called Of Course I Still Love You. That’s a reference to an amazing series of books by Iain M. Banks.
Why is this such an amazing accomplishment? What does the future hold for reusability? And who else is working on this?
Developing a rocket that could be reused has been one of the holy grails of the space industry, and yet, many considered it an engineering accomplishment that could never be achieved. Trust me, people have tried in the past.
Portions of the space shuttle were reused – the orbiter and the solid rocket boosters. And a few decades ago, NASA tried to develop the X-33 as a single stage reusable rocket, but ultimately canceled the program.
To reuse a rocket makes total sense. It’s not like you throw out your car when you return from a road trip. You don’t destroy your transatlantic airliner when you arrive in Europe. You check it out, do a little maintenance, refuel it, fill it with passengers and then fly it again.
According to SpaceX founder Elon Musk, a brand new Falcon 9 first stage costs about $30 million. If you could perform maintenance and then refill it with fuel, the marginal cost of each additional flight starts to approach the cost of the propellant itself, which is only a few hundred thousand dollars.
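Here is a deliberately simplified amortization sketch of why reuse matters. Apart from the roughly $30 million first-stage figure above, the numbers (refurbishment cost, propellant cost, number of flights) are illustrative assumptions, not SpaceX figures:

```python
# Toy amortization model for a reusable first stage (illustrative numbers only).
BOOSTER_COST = 30e6     # new first stage, per the article
FUEL_COST = 300_000     # assumed propellant cost per flight
REFURB_COST = 1e6       # assumed inspection/refurbishment cost per flight

def first_stage_cost_per_flight(num_flights: int) -> float:
    """Average first-stage cost per flight if the booster flies num_flights times."""
    return BOOSTER_COST / num_flights + FUEL_COST + REFURB_COST

for n in (1, 2, 10):
    print(f"{n:>2} flights: ~${first_stage_cost_per_flight(n) / 1e6:.1f}M per flight")
```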
SpaceX is still working out what a “flight-tested” launch on a reused Falcon 9 will cost, but it should turn into a significant discount on SpaceX’s already aggressive prices. If other launch providers think they’re getting undercut today, just wait until SpaceX really gets cranking with these reused rockets.
For most kinds of equipment, you actually want it to have been used many times. Cars are taken to the test track; airplanes are flown on many flights before passengers ever climb inside. SpaceX will have an opportunity to test each rocket many times, figuring out where they fail, and then re-engineering those components. This makes for more durable and safer launch hardware, which I suspect is the actual goal here: safety, not cost.
In addition to the first stage, SpaceX also recovered the payload fairing. This is the covering that makes the payload more aerodynamic while the rocket moves through the lower atmosphere. The fairing is usually ejected and destroyed, but SpaceX has figured out how to recover it too, saving a few more million dollars.
SpaceX’s goals are even more ambitious. In addition to the first stage booster and the payload fairing, SpaceX has looked at reusing the second stage as well. This is a much more complicated challenge, because the second stage is going much faster and needs to shed a lot more velocity. In late 2014, SpaceX put its plans for second stage reuse on hold.
SpaceX’s next big milestone will be to decrease the reuse time. From almost a year to under 24 hours.
Sometime this year, SpaceX is expected to perform the first launch of the Falcon Heavy, a launch system that looks like it’s made up of three Falcon 9 rockets bolted together. Since that’s basically what it is.
The center booster is a reinforced Falcon 9, with two additional Falcon 9s as strap-on boosters. Once the Falcon Heavy lifts off, the three boosters will detach and individually land back on Earth, ready for refurbishment and reuse. This system will be capable of carrying 54,000 kilograms into low Earth orbit. In addition, SpaceX is hoping to take the technology one more step and have the upper stage return to Earth.
Imagine it. Three boosters, the upper stage, and the payload fairing all returning to Earth and getting reused.
And waiting in the wings, of course, is SpaceX’s huge Interplanetary Transport System, announced by Elon Musk in September of 2016. The super-heavy lift vehicle will be capable of carrying 300,000 kilograms into low Earth orbit.
For comparison, the Apollo era Saturn V could carry 140,000 kg into low Earth orbit, so this thing will be much much bigger. But unlike the Saturn V, it’ll be capable of returning to Earth, and landing on its launch pad, ready for reuse.
SpaceX just crossed a milestone, but they’re not the only player in this field.
Perhaps the biggest competitor to SpaceX comes from another internet entrepreneur: Amazon’s Jeff Bezos, the second richest man in the world after Bill Gates. Bezos founded his own rocket company, Blue Origin, based near Seattle, which had been working in relative obscurity for the last decade. But in the last few years, they demonstrated their technology for reusable rocket flight, and laid out their plans for competing with SpaceX.
In April 2015, Blue Origin launched their New Shepard rocket on its first suborbital flight. The capsule was recovered, but the booster was lost during its landing attempt. In November 2015, New Shepard flew again, reached an altitude of just over 100 km, and then came back down and made the first successful vertical landing. That same booster went on to fly again several times in 2016.
That does sound exciting, but keep in mind that reaching 100 km in altitude requires vastly less energy than what the SpaceX Falcon 9 does. Suborbital and orbital are two totally different milestones. The New Shepard will be used to carry paying tourists to the edge of space, where they can float around weightlessly in the vomit of the other passengers.
But Blue Origin isn’t done. In September 2016, they announced their plans for the follow-on New Glenn rocket, and this one will compete head to head with SpaceX. Scheduled to launch by 2020, within three years or so, the New Glenn will be an absolute monster, capable of carrying 45,000 kilograms of cargo into low Earth orbit. That’s comparable to SpaceX’s Falcon Heavy or NASA’s Space Launch System.
Like the Falcon 9, the New Glenn will return to its launch pad, ready for a planned reuse of 100 flights.
A decade ago, the established United Launch Alliance – a consortium of Boeing and Lockheed Martin – was firmly in the camp of disposable launch systems, but even they’re coming around in the face of competition from SpaceX. In 2014, they began an alliance with Blue Origin to develop the Vulcan rocket.
The Vulcan will be more of a traditional rocket, but its first-stage engine section will detach in mid-flight, re-enter the Earth’s atmosphere, deploy a parachute, and be captured in mid-air by a helicopter. Since the engines are the most expensive part of the rocket, this will provide some cost savings.
There’s another level of reusability that’s still in the realm of science fiction: single stage to orbit. That’s where a rocket blasts off, flies to space, returns to Earth, refuels and does it all over again. There are some companies working on this, but it’ll be the topic for another episode.
Now that SpaceX has successfully launched a first stage booster for the second time, this is going to become the new normal. The rocket companies are going to be fine tuning their designs, focusing on efficiency, reliability, and turnaround time.
These changes will bring down the costs of launching payloads to orbit. That’ll mean it’s possible to launch satellites that were too expensive in the past. New scientific platforms, communications systems, and even human flights become more reasonable and commonplace.
Of course, we still need to take everything with a grain of salt. Most of what I talked about is still under development. That said, SpaceX just reused a rocket. They took a rocket that already launched a satellite, and used it to launch another satellite.
It’s a pretty exciting time, and I can’t wait to see what happens next.
Now you know how I feel about this accomplishment, I’d like to hear your thoughts. Do you think we’re at the edge of a whole new era in space exploration, or is this more of the same? Let me know your thoughts in the comments.
It sometimes doesn’t take much to tear a family apart. A Christmas dinner gone wrong can do that. But for a family of stars to be torn apart, something really huge has to happen.
The dramatic break-up of a family of stars played itself out in the Orion Nebula, about 540 years ago. The Orion Nebula is one of the most studied objects in our galaxy. It’s an active star forming region, where much of the star birth is concealed behind clouds of dust. Advances in infrared and radio astronomy have allowed us to peer into the Nebula, and to watch a stellar drama unfolding.
Over the last few decades, observations have shown two of the stars in this young family travelling off in different directions. In fact, they were travelling in opposite directions, and moving at very high speeds, much higher than stars normally travel. What caused it?
Astronomers were able to piece the story together by re-tracing the positions of both stars back 540 years. All those centuries ago, around the same time that it was dawning on humanity that Earth revolved around the Sun instead of the other way around, both of the speeding stars were in the same location. This suggested that the two were part of a star system that had broken up for some reason. But their combined energy didn’t add up.
Now, the Hubble has provided another clue to the whole story, by spotting a third runaway star. They traced the third star’s path back 540 years and found that it originated in the same location as the others. That location? An area near the center of the Orion Nebula called the Kleinmann-Low Nebula.
The team behind these new results, led by Kevin Luhman of Penn State University, will release their findings in the March 20, 2017 issue of The Astrophysical Journal Letters.
“The new Hubble observations provide very strong evidence that the three stars were ejected from a multiple-star system,” said Luhman. “Astronomers had previously found a few other examples of fast-moving stars that trace back to multiple-star systems, and therefore were likely ejected. But these three stars are the youngest examples of such ejected stars. They’re probably only a few hundred thousand years old. In fact, based on infrared images, the stars are still young enough to have disks of material leftover from their formation.”
“The Orion Nebula could be surrounded by additional fledgling stars that were ejected from it in the past and are now streaming away into space.” – Lead Researcher Kevin Luhman, Penn State University.
The three stars are travelling about 30 times faster than most of the Nebula’s other stellar inhabitants. Theory has predicted the phenomenon of these breakups in regions where newborn stars are crowded together. These gravitational back-and-forths are inevitable. “But we haven’t observed many examples, especially in very young clusters,” Luhman said. “The Orion Nebula could be surrounded by additional fledgling stars that were ejected from it in the past and are now streaming away into space.”
The key to this mystery is the recently discovered third star. But this star, the so-called “source x”, was discovered by accident. Luhman is part of a team using the Hubble to hunt for free-floating planets in the Orion Nebula. A comparison of Hubble infrared images from 2015 with images from 1998 showed that source x had changed its position. This indicated that the star was moving at a speed of about 130,000 miles per hour.
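To put that speed in perspective, here is a rough, hedged estimate, with rounded conversion factors of our own, of how far a star moving at 130,000 miles per hour would drift over the 540 years since the breakup:

```python
# Rough distance covered by "source x" since the breakup ~540 years ago.
speed_mph = 130_000
speed_m_s = speed_mph * 0.44704            # about 58 km/s
seconds = 540 * 365.25 * 24 * 3600         # 540 years in seconds
distance_m = speed_m_s * seconds

LIGHT_YEAR = 9.4607e15                     # meters per light-year
AU = 1.496e11                              # meters per astronomical unit

print(f"~{distance_m / LIGHT_YEAR:.2f} light-years "
      f"(~{distance_m / AU:,.0f} AU) from the ejection point")
```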
Luhman then re-traced source x’s path and found that 540 years ago it was in the same location as the other two runaway stars: the Kleinmann-Low Nebula.
According to Luhman, the three stars were most likely ejected from their system due to gravitational interactions that should be common in a densely packed region of newly born stars. Two of the stars can come very close together, either forming a tight binary system or even merging. That throws the gravitational parameters of the system out of whack, and other stars can be ejected. The ejection of those stars can also cause fingers of matter to flow out of the system.
As we get more powerful telescopes operating in the infrared, we should be able to clarify exactly what happens in areas of intense star formation like the Orion Nebula and its embedded Kleinmann-Low Nebula. The James Webb Space Telescope should advance our understanding greatly. If that’s the case, then not only will the details of star birth and formation become much clearer, but so will the break up of young families of stars.
We’re always talking about Mars here on the Guide to Space. And with good reason. Mars is awesome, and there’s a fleet of spacecraft orbiting, probing and crawling around the surface of Mars.
The Red Planet is the focus of so much of our attention because it’s reasonably close and offers humanity a viable place for a second home. Well, not exactly viable, but with the right technology and techniques, we might be able to make a sustainable civilization there.
We have the surface of Mars mapped in great detail, and we know what it looks like from the surface.
But there’s another planet we need to keep in mind: Venus. It’s bigger than Mars, and closer. And sure, it’s a hellish deathscape that would kill you in moments if you ever set foot on it, but it’s still a pretty interesting and mysterious place to visit.
Would it surprise you to know that many spacecraft have actually made it down to the surface of Venus, and photographed the place from the ground? It was an amazing feat of Soviet engineering, and there are some new technologies in the works that might help us get back, and explore it longer.
Today, let’s talk about the Soviet Venera program. The first time humanity saw Venus from its surface.
Back in the ’60s, at the height of the Cold War, the Americans and the Soviets were racing to be the first to explore the Solar System. First satellite to orbit Earth (Soviets), first human to orbit Earth (Soviets), first flyby and landing on the Moon (Soviets), first flyby of Mars (Americans), first flyby of Venus (Americans), etc.
The Soviets set their sights on putting a lander down on the surface of Venus. But as we know, this planet has some unique challenges. Every place on the entire planet measures the same 462 degrees C (or 864 F).
Furthermore, the atmospheric pressure on the surface of Venus is 90 times greater than Earth’s. Being down at the bottom of that column of atmosphere is roughly the same as being beneath a kilometer of ocean on Earth. Remember those submarine movies where they dive too deep and get crushed like a soda can?
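That “kilometer of ocean” comparison follows from the hydrostatic relation P = ρgh. Here is a minimal sketch; the seawater density and the 90-atmosphere figure are the only inputs, and the result is an approximation:

```python
# Depth of seawater needed to match Venus' ~90-atmosphere surface pressure.
ATM = 101_325          # pascals per atmosphere
RHO_SEAWATER = 1_025   # kg/m^3, typical seawater density
G = 9.81               # m/s^2

venus_pressure = 90 * ATM
depth = venus_pressure / (RHO_SEAWATER * G)
print(f"Equivalent ocean depth: ~{depth:.0f} m")  # roughly 900 m, about a kilometer
```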
Finally, it rains sulphuric acid. I mean, that’s really irritating.
Needless to say, figuring this out took the Soviets a few tries.
Their first attempts to even fly by Venus came in February 1961. The first probe, launched on February 4, failed to even escape Earth orbit. Venera 1 followed on February 12, 1961, but contact was lost on its way to Venus. This was followed by Venera 2, launched on November 12, 1965, which fell silent before it could return any data from its flyby.
Venera 3 blasted off on November 16, 1965, and was intended to land on the surface of Venus. The Soviets lost communication with the spacecraft, but it’s believed it did actually crash land on Venus. So I guess that was the first successful “landing” on Venus?
Before I continue, I’d like to talk a little bit about landing on planets. As we’ve discussed in the past, landing on Mars is really really hard. The atmosphere is thick enough that spacecraft will burn up if you aim directly for the surface, but it’s not thick enough to let you use parachutes to gently land on the surface.
Landing on the surface of Venus on the other hand, is super easy. The atmosphere is so thick that you can use parachutes no problem. If you can get on target and deploy a parachute capable of handling the terrible environment, your soft landing is pretty much assured. Surviving down there is another story, but we’ll get to that.
Venera 4 came next, launched on June 12, 1967. The Soviet scientists had few clues about what the surface of Venus was actually like. They didn’t know the atmospheric pressure, guessing it might be a little higher than Earth’s, or maybe hundreds of times higher. The lander was tested against high temperatures and brutal deceleration. They thought they’d built this thing plenty tough.
Venera 4 arrived at Venus on October 18, 1967, and tried to survive a landing. Temperatures on its heat shield were clocked at 11,000 C, and it experienced 300 Gs of deceleration.
The initial temperature reading, at an altitude of about 52 km, was a nice 33 C, but as the probe descended towards the surface, the temperature climbed to 262 C. And then they lost contact with the probe, killed by the brutal conditions.
We can assume it landed, though, and for the first time, scientists caught a glimpse of just how bad it is down there on the surface of Venus.
Venera 5 was launched on January 5, 1969, and was built tougher, learning from the lessons of Venera 4. It also made it into Venus’ atmosphere, returned some interesting science about the planet, and then died before it reached the surface.
Venera 6 followed, same deal. Built tougher, died in the atmosphere, returned some useful science.
Venera 7 was built with a full understanding of how bad it was down there on Venus. It launched on August 17, 1970, and arrived in December. It’s believed that the parachutes on the spacecraft only partially deployed, allowing it to descend more quickly through the Venusian atmosphere than originally planned. It smacked into the surface going about 16.5 m/s, but amazingly, it survived, and continued to send back a weak signal to Earth for about 23 minutes.
For the first time ever, a spacecraft had made it down to the surface of Venus and communicated its status. I’m sure it was just 23 minutes of robotic screaming, but still, progress. Scientists got their first accurate measurements of the temperature and pressure down there.
Bottom line, humans could never survive on the surface of Venus.
Venera 8 blasted off for Venus on March 27, 1972, and the Soviet engineers built it to survive the descent and landing as long as possible. It made it through the atmosphere, landed on the surface, and returned data for about 50 minutes. It didn’t have a camera, but it did have a light sensor, which told scientists that being on Venus was kind of like Earth on an overcast day. Enough light to take pictures… next time.
For their next missions, the Soviets went back to the drawing board and built entirely new landing craft. Built big, heavy and tough, designed to get to the surface of Venus and survive long enough to send back data and pictures.
Venera 9 was launched on June 8, 1975. It survived the atmospheric descent and landed on the surface of Venus. The lander was built like a liquid-cooled, heavily insulated pressure vessel, using circulating fluid to keep the electronics cool as long as possible. In this case, that was 53 minutes. Venera 9 measured clouds of acid, bromine and other toxic chemicals, and sent back grainy black and white television pictures from the surface of Venus.
In fact, these were the first pictures ever taken from the surface of another planet.
Venera 10 lasted for 65 minutes and took pictures of the surface with one camera; the lens cap on a second camera didn’t release. The spacecraft saw lava rocks with layers of other rocks in between, similar to environments you might see here on Earth.
Venera 11 was launched on September 9, 1978 and lasted for 95 minutes on the surface of Venus. In addition to confirming the horrible environment discovered by the other landers, Venera 11 detected lightning strikes in the vicinity. It was equipped with a color camera, but the lens caps on both it and the black and white camera failed to deploy, so it sent no pictures home.
Venera 12 was launched on September 14, 1978, and made it down to the surface of Venus. It lasted 110 minutes and returned detailed information about the chemical composition of the atmosphere. Unfortunately, both its camera lens caps failed to deploy, so no pictures were returned. And pictures are what we really care about, right?
Venera 13 was built on the same tougher, beefier design, and was blasted off to Venus on October 30, 1981, and this one was a tremendous success. It landed on Venus and survived for 127 minutes. It took pictures of its surroundings using two cameras peering through quartz windows, and saw a landscape of bedrock. It used spring-loaded arms to test out how compressible the soil was.
Venera 14 was identical and launched just 5 days after Venera 13. It also landed and survived for 57 minutes. Unfortunately, its experiment to test the compressibility of the soil was a botch because one of its lens caps landed right under its spring-loaded arm. But apart from that, it sent back color pictures of the hellish landscape.
And with that, the Soviet Venus landing program ended. And since then, no additional spacecraft have ever returned to the surface of Venus.
It’s one thing for a lander to make it to the surface of Venus, last a few minutes and then die from the horrible environment. What we really want is some kind of rover, like Curiosity, which would last on the surface of Venus for weeks, months or even years and do more science.
The problem is the electronics: computers don’t like this kind of heat. Go ahead, put your computer in the oven and set it to 850 degrees. Oh, your oven doesn’t go to 850? That’s fine, because it would be insane. Seriously, don’t do that, it would be bad.
Engineers at NASA’s Glenn Research Center have developed a new kind of electrical circuitry that might be able to handle those kinds of temperatures. Their new circuits were tested in the Glenn Extreme Environments Rig, which can simulate the surface of Venus. It can mimic the temperature, pressure and even the chemistry of Venus’ atmosphere.
The circuitry, originally designed for hot jet engines, lasted for 521 hours, functioning perfectly. If all goes well, future Venus rovers could be designed to survive on the surface of Venus without needing complex and short-lived cooling systems.
This discovery might unleash a whole new era of exploration of Venus, to confirm once and for all that it really does suck.
While the Soviets had a tough time with Mars, they really nailed it with Venus. You can see how they built and launched spacecraft after spacecraft, sticking with this challenge until they got the pictures and data they were looking for. I really think this series is one of the triumphs of robotic space exploration, and I look forward to future mission concepts to pick up where the Soviets left off.
Are you excited about the prospects of exploring Venus with rovers? Let me know your thoughts in the comments.
We humans have an insatiable hunger to understand the Universe. As Carl Sagan said, “Understanding is Ecstasy.” But to understand the Universe, we need better and better ways to observe it. And that means one thing: big, huge, enormous telescopes.
In this series we’ll look at 6 of the world’s Super Telescopes:
The James Webb Space Telescope (JWST, or the Webb) may be the most eagerly anticipated of the Super Telescopes. Maybe because it has endured a tortured path on its way to being built. Or maybe because it’s different from the other Super Telescopes, what with it being 1.5 million km (1 million miles) away from Earth once it’s operating.
If you’ve been following the drama behind the Webb, you’ll know that cost overruns almost caused it to be cancelled. That would’ve been a real shame.
The JWST has been brewing since 1996, but has suffered some bumps along the road. That road and its bumps have been discussed elsewhere, so what follows is a brief rundown.
Initial estimates for the JWST were a $1.6 billion price tag and a launch date of 2011. But the costs ballooned, and there were other problems. This caused the House of Representatives in the US to move to cancel the project in 2011. However, later that same year, US Congress reversed the cancellation. Eventually, the final cost of the Webb came to $8.8 billion, with a launch date set for October 2018. That means the JWST will see first light much sooner than the other Super Telescopes.
The Webb was envisioned as a successor to the Hubble Space Telescope, which has been in operation since 1990. But the Hubble is in Low Earth Orbit, and has a primary mirror of 2.4 meters. The JWST will be located at the Lagrange 2 point (L2), and its primary mirror will be 6.5 meters. The Hubble observes in the near-ultraviolet, visible, and near-infrared spectra, while the Webb will observe from long-wavelength (orange-red) visible light through the near-infrared to the mid-infrared. This has some important implications for the science yielded by the Webb.
The Webb’s Instruments
The James Webb is built around four instruments:
The Near-Infrared Camera (NIRCam)
The Near-Infrared Spectrograph (NIRSpec)
The Mid-Infrared Instrument (MIRI)
The Fine Guidance Sensor/ Near InfraRed Imager and Slitless Spectrograph (FGS/NIRISS)
The NIRCam is Webb’s primary imager. It will observe the formation of the earliest stars and galaxies, the population of stars in nearby galaxies, Kuiper Belt Objects, and young stars in the Milky Way. NIRCam is equipped with coronagraphs, which block out the light from bright objects in order to observe dimmer objects nearby.
NIRSpec will operate over a range from 0.6 to 5 microns. Its spectrograph will split the light into a spectrum. The resulting spectrum tells us about an object’s temperature, mass, and chemical composition. NIRSpec will be able to observe 100 objects at once.
MIRI is a camera and a spectrograph. It will see the redshifted light of distant galaxies, newly forming stars, objects in the Kuiper Belt, and faint comets. MIRI’s camera will provide wide-field, broadband imaging that will rank up there with the astonishing images that Hubble has given us a steady diet of. The spectrograph will provide physical details of the distant objects it will observe.
The Fine Guidance Sensor part of FGS/NIRISS will give the Webb the precision required to yield high-quality images. NIRISS is a specialized instrument operating in three modes. It will investigate first light detection, exoplanet detection and characterization, and exoplanet transit spectroscopy.
The Science
The over-arching goal of the JWST, along with many other telescopes, is to understand the Universe and our origins. The Webb will investigate four broad themes:
First Light and Re-ionization: In the early stages of the Universe, there was no light. The Universe was opaque. Eventually, as it cooled, photons were able to travel more freely. Then, probably hundreds of millions of years after the Big Bang, the first light sources formed: stars. But we don’t know when, or what types of stars.
How Galaxies Assemble: We’re accustomed to seeing stunning images of the grand spiral galaxies that exist in the Universe today. But galaxies weren’t always like that. Early galaxies were often small and clumpy. How did they form into the shapes we see today?
The Birth of Stars and Protoplanetary Systems: The Webb’s keen eye will peer straight through clouds of dust that ‘scopes like the Hubble can’t see through. Those clouds of dust are where stars are forming, and their protoplanetary systems. What we see there will tell us a lot about the formation of our own Solar System, as well as shedding light on many other questions.
Planets and the Origins of Life: We now know that exoplanets are common. We’ve found thousands of them orbiting all types of stars. But we still know very little about them, like how common atmospheres are, and if the building blocks of life are common.
These are all obviously fascinating topics. But in our current times, one of them stands out among the others: Planets and the Origins of Life.
The recent discovery of the TRAPPIST-1 system has people excited about possibly discovering life in another solar system. TRAPPIST-1 has seven terrestrial planets, and three of them are in the habitable zone. It was huge news in February 2017. The buzz is still palpable, and people are eagerly awaiting more news about the system. That’s where the JWST comes in.
One big question around the TRAPPIST-1 system is “Do the planets have atmospheres?” The Webb can help us answer this.
The NIRSpec instrument on JWST will be able to detect any atmospheres around those planets. Maybe more importantly, it will be able to investigate those atmospheres and tell us about their composition. If atmospheres are there, we will know whether they contain greenhouse gases. The Webb may also detect chemicals like ozone and methane, which are biosignatures that can tell us if life might be present on those planets.
You could say that if the James Webb detects atmospheres on the TRAPPIST-1 planets and confirms the existence of biosignature chemicals there, it will already have done its job, even if it stops working after that. That’s probably far-fetched. But still, the possibility is there.
Launch and Deployment
The science that the JWST will provide is extremely intriguing. But we’re not there yet. There’s still the matter of the JWST’s launch, and its tricky deployment.
The JWST’s primary mirror is much larger than the Hubble’s: 6.5 meters in diameter, versus 2.4 meters for the Hubble. Launching the Hubble was no problem, despite it being as large as a school bus. It was placed inside a space shuttle and deployed by the Canadarm in low Earth orbit. That won’t work for the James Webb.
The Webb has to be launched aboard a rocket to be sent on its way to L2, its eventual home. And in order to be launched aboard its rocket, it has to fit into the cargo space in the rocket’s nose. That means it has to be folded up.
The mirror, which is made up of 18 segments, is folded into three sections inside the rocket and unfolded on its way to L2. The antenna and the solar panels also need to unfold.
Unlike the Hubble, the Webb needs to be kept extremely cold to do its work. It has a cryo-cooler to help with that, but it also has an enormous sunshield, five layers thick and roughly the size of a tennis court.
We need all of these components to deploy for the Webb to do its thing. And nothing like this has been tried before.
The Webb’s launch is only 7 months away. That’s really close, considering the project almost got cancelled. There’s a cornucopia of science to be done once it’s working.
But we’re not there yet, and we’ll have to go through the nerve-wracking launch and deployment before we can really get excited.
As Carl Sagan said, “Understanding is Ecstasy.” But in order to understand the Universe, we need better and better ways to observe it. And that means one thing: big, huge, enormous telescopes.
In this series, we’ll look at six Super Telescopes being built:
The Thirty Meter Telescope (TMT) is being built by an international group of countries and institutions, like a lot of Super Telescopes are. In fact, they’re proud to point out that the international consortium behind the TMT represents almost half of the world’s population: China, India, the USA, Japan, and Canada. The project needs that many partners to absorb the cost: an estimated $1.5 billion.
The heart of any of the world’s Super Telescopes is the primary mirror, and the TMT is no different. The primary mirror for the TMT is, obviously, 30 meters in diameter. It’s a segmented design consisting of 492 smaller mirrors, each one a 1.4 meter hexagon.
The light collecting capability of the TMT will be 10 times that of the Keck Telescope, and more than 144 times that of the Hubble Space Telescope.
But the TMT is more than just an enormous ‘light bucket.’ It also excels with other capabilities that define a super telescope’s effectiveness. One of those is what’s called diffraction-limited spatial resolution (DLSR).
When a telescope is pointed at distant objects that appear close together, the light from both can blur enough to make the two objects appear as one. Diffraction-limited spatial resolution means that a telescope’s sharpness is limited only by the physics of diffraction, not by defects in its optics, so none of the light from the object it’s observing gets smeared out by imperfections in the telescope. The TMT will more easily distinguish objects that are close to each other. When it comes to DLSR, the TMT will exceed the Keck by a factor of 3, and will exceed the Hubble by a factor of 10 at some wavelengths.
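The scaling behind those numbers is the diffraction limit, θ ≈ 1.22 λ/D: resolution improves in direct proportion to mirror diameter. Here is a minimal sketch, using an example wavelength of 1 micron (our choice, not a TMT specification):

```python
import math

# Diffraction-limited angular resolution: theta ~ 1.22 * wavelength / diameter.
WAVELENGTH = 1e-6  # 1 micron (near-infrared), chosen as an example
RAD_TO_ARCSEC = 180 / math.pi * 3600

telescopes = {"TMT": 30.0, "Keck": 10.0, "Hubble": 2.4}  # primary mirror diameters, m

for name, diameter in telescopes.items():
    theta_arcsec = 1.22 * WAVELENGTH / diameter * RAD_TO_ARCSEC
    print(f"{name:>6}: {theta_arcsec * 1000:.1f} milliarcseconds")

# The diameter ratios (30/10 = 3, 30/2.4 = 12.5) are why the TMT's diffraction
# limit beats Keck by ~3x and Hubble by roughly an order of magnitude.
```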
Crucial to the function of large, segmented mirrors like the TMT is active optics. By controlling the shape and position of each segment, active optics allows the primary mirror to compensate for changes in wind, temperature, or mechanical stress on the telescope. Without active optics, and its sister technology adaptive optics, which compensates for atmospheric disturbance, any telescope larger than about 8 meters would not function properly.
The TMT will operate in the near-ultraviolet, visible, and near-infrared wavelengths. It will be smaller than the European Extremely Large Telescope (E-ELT), which will have a 39 meter primary mirror. The E-ELT will operate in the optical and infrared wavelengths.
The world’s Super Telescopes are behemoths, not just in the size of their mirrors but in their mass. The TMT’s moving mass will be about 1,420 tonnes. Yet the TMT is designed to move fast, because it must respond quickly when something like a supernova is spotted. The detailed science case calls for the TMT to acquire a new target within 5 to 10 minutes.
This requires a complex computer system to coordinate the science instruments, the mirrors, the active optics, and the adaptive optics. This was one of the initial challenges of the TMT project. It will allow the TMT to respond to transient phenomena like supernovae when spotted by other telescopes like the Large Synoptic Survey Telescope.
The Science
The TMT will investigate most of the important questions in astronomy and cosmology today. Here’s an overview of major topics that the TMT will address:
The Nature of Dark Matter
The Physics of Extreme Objects like Neutron Stars
Early galaxies and Cosmic Reionization
Galaxy Formation
Super-Massive Black Holes
Exploration of the Milky Way and Nearby Galaxies
The Birth and Early Lives of Stars and Planets
Time Domain Science: Supernovae and Gamma Ray Bursts
Exo-planets
Our Solar System
This is a comprehensive list of topics, to be sure. It leaves very little out, and is a testament to the power and effectiveness of the TMT.
The raw power of the TMT is not in question. Once in operation it will advance our understanding of the Universe on multiple fronts. But the actual location of the TMT could still be in question.
The dispute between some of the Hawaiian people and the TMT has been well-documented elsewhere, but the basic complaint about the TMT is that the top of Mauna Kea is sacred land, and they would like the TMT to be built elsewhere.
The organizations behind the TMT would still like it to be built at Mauna Kea, and a legal process is unfolding around the dispute. During that process, they identified several possible alternate sites for the telescope, including La Palma in the Canary Islands. Universe Today contacted TMT Observatory Scientist Christophe Dumas, PhD, about the possible relocation of the TMT to another site.
Dr. Dumas told us that “Mauna Kea remains the preferred location for the TMT because of its superb observing conditions, and because of the synergy with other TMT partner facilities already present on the mountain. Its very high elevation of almost 14,000 feet makes it the premier astronomical site in the northern hemisphere. The sky above Mauna Kea is very stable, which allows very sharp images to be obtained. It has also excellent transparency, low light pollution and stable cold temperatures that improves sensitivity for observations in the infrared.”
The preferred secondary site at La Palma is home to over 10 other telescopes, but would relocation to the Canary Islands affect the science done by the TMT? Dr. Dumas says that the Canary Islands site is excellent as well, with similar atmospheric characteristics to Mauna Kea, including stability, transparency, darkness, and fraction of clear-nights.
As Dr. Dumas explains, “La Palma is at a lower elevation site and on average warmer than Mauna Kea. These two factors will reduce TMT sensitivity at some wavelengths in the infrared region of the spectrum.”
Dr. Dumas told Universe Today that this reduced sensitivity in the infrared can be overcome somewhat by scheduling different observing tasks. “This specific issue can be partly mitigated by implementing an adaptive scheduling of TMT observations, to match the execution of the most demanding infrared programs with the best atmospheric conditions above La Palma.”
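As a simple illustration of that idea (not the observatory’s actual scheduler), adaptive scheduling amounts to giving the most infrared-demanding programs the nights with the coldest, driest, most stable air, and letting less demanding programs take the rest. All names and numbers below are invented for the example.

programs = [  # higher ir_demand = more sensitive to warm, humid air
    {"name": "exoplanet spectroscopy", "ir_demand": 0.9},
    {"name": "galaxy survey",          "ir_demand": 0.3},
    {"name": "supernova follow-up",    "ir_demand": 0.6},
]
nights = [  # higher quality = colder, drier, more stable atmosphere
    {"date": "2025-01-10", "quality": 0.4},
    {"date": "2025-01-11", "quality": 0.95},
    {"date": "2025-01-12", "quality": 0.7},
]

# Pair the most demanding program with the best night, and so on down the list.
schedule = list(zip(
    sorted(programs, key=lambda p: p["ir_demand"], reverse=True),
    sorted(nights, key=lambda n: n["quality"], reverse=True),
))
for program, night in schedule:
    print(f'{night["date"]}: {program["name"]}')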
Court Proceedings End
On March 3rd, 44 days of court hearings into the TMT wrapped up. In that time, 71 people testified for and against the TMT being constructed on Mauna Kea. Those against the telescope say that the site is sacred land and shouldn’t have any more telescope construction on it. Those for the TMT spoke in favor of the science that the TMT will deliver to everyone, and the education opportunities it will provide to Hawaiians.
Though construction has been delayed, and people have gone to court to have the project stopped, it seems like the TMT will definitely be built—somewhere. The funding is in place, the design is finalized, and manufacturing of the components is underway. The delays mean that the TMT’s first light is still uncertain, but once we get there, the TMT will be another game-changer, just like the world’s other Super Telescopes.
We humans have an insatiable hunger to understand the Universe. As Carl Sagan said, “Understanding is Ecstasy.” But to understand the Universe, we need better and better ways to observe it. And that means one thing: big, huge, enormous telescopes.
While the world’s other Super Telescopes rely on huge mirrors to do their work, the LSST is different. It’s a huge panoramic camera that will create an enormous moving image of the Universe. And its work will be guided by three words: wide, deep, and fast.
While other telescopes capture static images, the LSST will capture richly detailed images of the entire available night sky, over and over. This will allow astronomers to basically “watch” the movement of objects in the sky, night after night. And the imagery will be available to anyone.
The LSST is being built by a group of institutions in the US, and even got some money from Bill Gates. It will be situated atop Cerro Pachon, a peak in Northern Chile. The Gemini South and Southern Astrophysical Research Telescopes are also situated there.
The Camera Inside the ‘Scope
At the heart of the LSST is its enormous digital camera. It weighs over three tons, and its sensor is segmented, much like the mirrors of the other Super Telescopes. The camera’s focal plane is made up of 189 segments, which together form a sensor about 2 ft. in diameter, sitting behind a lens over 5 ft. in diameter.
Each image the LSST captures covers an area of sky roughly 40 times that of the full Moon, and will measure 3.2 gigapixels. The camera will capture one of these wide-field images every 20 seconds, all night long. Every few nights, the LSST will give us an image of the entire available night sky, and it will do that for 10 years.
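Those numbers hang together with a little arithmetic. Assuming each of the 189 segments is roughly a 4096 x 4096-pixel CCD (the figure usually quoted for the LSST focal plane), and assuming a 10-hour observing night for the sake of the example, the quoted pixel count and cadence work out as follows.

# Rough check of the camera numbers, under the assumptions stated above.
segments = 189
pixels_per_segment = 4096 * 4096

total_pixels = segments * pixels_per_segment
print(f"Focal plane: {total_pixels / 1e9:.1f} gigapixels")   # ~3.2 gigapixels

# One wide-field image every 20 seconds over an assumed 10-hour night.
exposures_per_night = 10 * 3600 // 20
print(f"Roughly {exposures_per_night} exposures per night")  # 1800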
“The LSST survey will open a movie-like window on objects that change brightness, or move, on timescales ranging from 10 seconds to 10 years.” – LSST: FROM SCIENCE DRIVERS TO REFERENCE DESIGN AND ANTICIPATED DATA PRODUCTS
The LSST will capture a vast, movie-like image of over 40 billion objects. This will range from distant, enormous galaxies all the way down to Potentially Hazardous Objects as small as 140 meters in diameter.
There’s another side to the LSST that is a little more challenging. A deep, detailed, moving image of the sky is intuitively easy to engage with. The data behind it is not: the LSST is also an enormous data-mining challenge.
The Data Challenge
The whole endeavour will create an enormous amount of data. Over 15 terabytes will have to be processed every night. Over its 10 year lifespan, it will capture 60 petabytes of data.
Once data is captured by the LSST, it will travel via two dedicated 40 gigabit-per-second lines to the Data Processing and Archive Center, a supercomputing facility that will manage all the data and make it available to users. But when it comes to handling the data, that’s just the tip of the iceberg.
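For a rough sense of scale, here is the back-of-the-envelope arithmetic behind those figures. The ~300 usable nights per year and the link speed are assumptions for the example, and the real survey budgets more than this simple product.

TB = 1e12  # bytes

# ~15 TB of data per night, as quoted above.
nightly_data = 15 * TB

# Ten years at an assumed ~300 usable nights per year.
survey_total = nightly_data * 300 * 10
print(f"Survey total: ~{survey_total / 1e15:.0f} PB")   # ~45 PB, the same ballpark as the ~60 PB quoted

# Time to move one night of data over a single 40 gigabit-per-second line (assumed speed).
link_bits_per_second = 40e9
hours = nightly_data * 8 / link_bits_per_second / 3600
print(f"~{hours:.1f} hours to transfer one night of data on one line")   # ~0.8 hours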
“LSST is a new way to observe, and gaining knowledge from the Big Data LSST delivers is indeed a challenge.” – Suzanne H. Jacoby, LSST
The sheer amount of data created by the LSST is a challenge that the team behind it saw coming. They knew they would have to build the capacity of the scientific community in advance, in order to get the most out of the LSST.
As Suzanne Jacoby, from the LSST team, told Universe Today, “To prepare the science community for LSST Operations, the LSST Corporation has undertaken an ‘Enabling Science’ effort which funds the LSST Data Science Fellowship Program (DSFP). This two-year program is designed to supplement existing graduate school curriculum and explores topics including statistics, machine learning, information theory, and scalable programming.”
The Science
The Nature of Dark Matter and Understanding Dark Energy
Contributing to our understanding of Dark Energy and Dark Matter is a goal of all of the Super Telescopes. The LSST will map several billion galaxies through time and space. It will help us understand how Dark Energy behaves over time, and how Dark Matter affects the development of cosmic structure.
Cataloging the Solar System
The raw imaging power of the LSST will be a game-changer for mapping and cataloguing our Solar System. It’s thought that the LSST could detect between 60% and 90% of all potentially hazardous asteroids (PHAs) larger than 140 meters in diameter, as far away as the main asteroid belt. This will not only contribute to NASA’s goal of identifying threats to Earth posed by asteroids, but will also help us understand how planets formed and how our Solar System evolved.
Exploring the Changing Sky
The repeated imaging of the night sky, at great depth and with excellent image quality, should tell us a lot about supernovae, variable stars, and possibly other phenomena we haven’t even discovered yet. There are always surprising results whenever we build a new telescope or send a probe to a new destination. The LSST will probably be no different.
Milky Way Structure & Formation
The LSST will give us an unprecedented look at the Milky Way. It will survey over half of the sky, and will do so repeatedly. Hundreds of times, in fact. The end result will be an enormously detailed look at the motion of millions of stars in our galaxy.
Open Access
Perhaps the best part of the whole LSST project is that all of the data will be available to everyone. Anyone with a computer and an internet connection will be able to access the LSST’s movie of the Universe. It’s warm and fuzzy, to be sure, to have the results of large science endeavours like this available to anyone. But there’s more to it: the LSST team suspects that the majority of discoveries resulting from its rich data will come from unaffiliated astronomers, students, and even amateurs.
It was designed from the ground up in this way, and there will be no delay or proprietary barriers when it comes to public data access. In fact, Google has signed on as a partner with LSST because of the desire for public access to the data. We’ve seen what Google has done with Google Earth and Google Sky. What will they come up with for Google LSST?
The Sloan Digital Sky Survey (SDSS), a kind of predecessor to the LSST, was modelled in the same way. All of its data was available to astronomers not affiliated with the survey, and of the more than 6,000 papers that refer to SDSS data, the large majority were published by astronomers not affiliated with SDSS.
First Light
We’ll have to wait a while for all of this to come our way, though. First light for the LSST won’t be until 2021, and it will begin its 10 year run in 2022. At that time, be ready for a whole new look at our Universe. The LSST will be a game-changer.