Looking to the future, astronomers are excited to see how machine learning – deep learning and artificial intelligence (AI) more broadly – will enhance surveys. One field that is already benefitting is the search for extrasolar planets, where researchers rely on machine-learning algorithms to distinguish faint signals from background noise. As this field continues to transition from discovery to characterization, the role of machine intelligence is likely to become even more critical.
Take the Kepler Space Telescope, which accounted for 2,879 confirmed discoveries (out of the 4,575 exoplanets discovered to date) during its nearly ten years of service. After examining the data collected by Kepler using a new deep-learning neural network called ExoMiner, a research team at NASA’s Ames Research Center was able to detect 301 more planetary signals and add them to the growing census of exoplanets.
Planetary scientists estimate that each year, about 500 meteorites survive the fiery trip through Earth’s atmosphere and fall to our planet’s surface. Most are quite small, and less than 2% of them are ever recovered. Many are unrecoverable because they end up in oceans or in remote, inaccessible areas, while other falls are simply never witnessed or reported.
But new technology has increased the number of known falls in recent years. Doppler radar has detected meteorite falls, as have all-sky camera networks specifically on the lookout for meteors. Additionally, the increased use of dashcams and security cameras has allowed for more serendipitous sightings and data on fireballs and potential meteorite falls.
A team of researchers is now taking advantage of further technological advances, testing drones and machine learning for automated searches for small meteorites. The drones are programmed to fly a grid search pattern in the projected ‘strewn field’ of a recent meteorite fall, taking systematic pictures of the ground over a large survey area. Artificial intelligence is then used to search through the pictures to identify potential meteorites.
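To make that image-triage step concrete, here is a minimal sketch of how it might work: tile each survey photo into patches and score each patch with a small binary classifier (“meteorite” vs. “background”). The architecture, tile size, and threshold below are illustrative assumptions, not the team’s actual pipeline.

```python
# A minimal sketch of scanning drone survey photos for meteorite candidates.
# The network, tile size, and threshold are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),  # sized for 64x64 input patches
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def find_candidates(image, model, tile=64, threshold=0.9):
    """Slide a non-overlapping grid over one survey photo and return the
    pixel coordinates of tiles the model scores above the threshold."""
    h, w, _ = image.shape
    candidates = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = image[y:y + tile, x:x + tile]
            t = torch.from_numpy(patch).float().permute(2, 0, 1) / 255.0
            score = model(t.unsqueeze(0)).item()
            if score > threshold:
                candidates.append((x, y, score))
    return candidates

model = PatchClassifier().eval()  # in practice, load trained weights here
photo = np.random.randint(0, 255, (1024, 1024, 3), dtype=np.uint8)
with torch.no_grad():
    print(find_candidates(photo, model))
```

In a real deployment, each flagged tile would carry the drone’s GPS fix, so a human searcher only has to walk to a short list of candidate locations rather than sweep the whole strewn field.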
Atop Haleakala on the Hawaiian island of Maui sits the Panoramic Survey Telescope and Rapid Response System, or Pan-STARRS1 (PS1). As part of the Haleakala Observatory overseen by the University of Hawaii, Pan-STARRS1 relies on a system of cameras, telescopes, and a computing facility to conduct an optical imaging survey of the sky, as well as astrometry and photometry of known objects.
In 2018, the University of Hawaii at Manoa’s Institute for Astronomy (IfA) released the PS1 3pi survey, the world’s largest digital sky survey, spanning three-quarters of the sky and encompassing 3 billion objects. And now, a team of astronomers from the IfA has used this data to create the Pan-STARRS1 Source Types and Redshifts with Machine Learning (PS1-STRM), the world’s largest three-dimensional astronomical catalog.
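The core idea behind a catalog like PS1-STRM – using a neural network to estimate each source’s redshift from its broadband photometry – can be sketched in a few lines. Everything below (the synthetic data, the features, and the network size) is an illustrative assumption; the actual PS1-STRM inputs and architecture differ.

```python
# A toy sketch of photometric-redshift regression: map magnitudes and
# colors to redshift with a small neural network. Data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
z = rng.uniform(0.0, 1.5, n)                       # "true" redshifts
# Fake five-band magnitudes loosely correlated with redshift, plus noise.
mags = 20 + 2 * z[:, None] + rng.normal(0, 0.3, (n, 5))
colors = np.diff(mags, axis=1)                     # adjacent-band colors
X = np.hstack([mags, colors])

X_train, X_test, z_train, z_test = train_test_split(X, z, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                     random_state=0).fit(X_train, z_train)
resid = model.predict(X_test) - z_test
print("test RMS redshift error:", np.sqrt(np.mean(resid ** 2)))
```

The payoff of this approach is scale: once trained and validated on sources with spectroscopic redshifts, the network can assign a redshift estimate to billions of objects far faster than spectroscopy ever could.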
Advances in technology are having a profound impact on astronomy and astrophysics. At one end, we have advanced hardware like adaptive optics, coronagraphs, and spectrometers that allow for more light to be gathered from the cosmos. At the other end, we have improved software and machine learning algorithms that are allowing the data to be analyzed and mined for valuable nuggets of information.
One area of research where this is proving invaluable is the hunt for exoplanets and the search for life. Researchers at the University of Warwick recently developed an algorithm that confirmed the existence of 50 new exoplanets. When applied to archival data, it sifted through a sample of candidates and determined which were actual planets and which were false positives.
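The Warwick team’s method is a probabilistic, Bayesian one, but the general flavor of machine-vetting transit candidates can be illustrated with a simple classifier that outputs a planet probability for each candidate. The features and labels below are synthetic stand-ins, not the team’s actual inputs or model.

```python
# A simplified sketch of statistical candidate vetting: score each transit
# candidate with the probability that it is a genuine planet rather than a
# false positive. All data here is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000
# Illustrative vetting features: transit depth, duration, signal-to-noise,
# odd-even depth difference, and secondary-eclipse depth.
X = rng.normal(size=(n, 5))
# Synthetic ground truth: "planets" have high SNR and small odd-even
# differences (a crude stand-in for the real vetting physics).
y = ((X[:, 2] > 0) & (np.abs(X[:, 3]) < 1)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
candidate = rng.normal(size=(1, 5))
print("P(planet) =", clf.predict_proba(candidate)[0, 1])
```

The key step in real validation pipelines is the same as in this toy: rather than a yes/no label, each candidate gets a calibrated probability, and only those above a stringent threshold are declared confirmed planets.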
Modern professional astronomers aren’t much like astronomers of old. They don’t spend every suitable evening with their eyes glued to a telescope’s eyepiece. You might be more likely to find them in front of a supercomputer, working with AI and deep learning methods.
One group of researchers employed those methods to find a whole new collection of stars in the Milky Way – stars that weren’t born here.
As long as human beings have been sending satellites into space, they have been contemplating ways to destroy them. In recent years, the technology behind anti-satellite (ASAT) weapons has progressed considerably. What’s more, the ability to launch satellites and to destroy them extends beyond the two traditional superpowers (the US and Russia) to include newcomers like India, China, and others.
For this reason, Sandia National Laboratories – a federal research center headquartered in New Mexico – has launched a seven-year campaign to develop autonomous satellite protection systems. Known as the Science and Technology Advancing Resilience for Contested Space (STARCS), this campaign will fund the creation of hardware and software that will allow satellites to defend themselves.
In 2023, NASA plans to launch the Europa Clipper mission, a robotic explorer that will study Jupiter’s enigmatic moon Europa. The purpose of this mission is to explore Europa’s ice shell and interior to learn more about the moon’s composition, geology, and interactions between the surface and subsurface. Most of all, the purpose of this mission is to shed light on whether or not life could exist within Europa’s interior ocean.
This presents numerous challenges, many of which arise from the fact that the Europa Clipper will be very far from Earth when it conducts its science operations. To address this, a team of researchers from NASA’s Jet Propulsion Laboratory (JPL) and Arizona State University (ASU) designed a series of machine-learning algorithms that will allow the mission to explore Europa with a degree of autonomy.
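As one illustration of what onboard autonomy can mean in practice, a spacecraft with limited downlink bandwidth might triage its own images, flagging statistically unusual frames (a possible plume or thermal anomaly, say) for priority transmission. The sketch below is a toy example built on that assumption; it is not the JPL/ASU team’s algorithms.

```python
# A toy sketch of onboard data triage: flag frames whose mean brightness
# deviates strongly from the running baseline, so they can be prioritized
# for downlink. The statistic and threshold are illustrative assumptions.
import numpy as np

def triage(frames, sigma=5.0):
    """Return indices of frames whose mean brightness deviates from the
    ensemble mean by more than `sigma` standard deviations."""
    brightness = np.array([f.mean() for f in frames])
    mu, sd = brightness.mean(), brightness.std()
    return [i for i, b in enumerate(brightness) if abs(b - mu) > sigma * sd]

rng = np.random.default_rng(2)
frames = [rng.normal(100, 5, (64, 64)) for _ in range(50)]
frames[17] += 40  # inject a bright anomaly into one frame
print("frames flagged for priority downlink:", triage(frames))
```

The design point is that the decision runs on the spacecraft itself: with round-trip light times of roughly an hour at Jupiter, waiting for ground controllers to pick the interesting frames simply isn’t an option.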
How in the world could you possibly look inside a star? You could break out the scalpels and other tools of the surgical trade, but good luck getting within a few million kilometers of the surface before your skin melts off. The stars of our universe hide their secrets very well, but astronomers can outmatch their cleverness and have found ways to peer into their hearts using, of all things, sound waves.
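The technique is known as asteroseismology: sound waves trapped inside a star make its surface oscillate, and the frequencies of those oscillations encode the star’s bulk properties. For context (these relations come from the broader asteroseismology literature, not from this article), the widely used scaling relations tie two observable quantities, the large frequency separation $\Delta\nu$ and the frequency of maximum oscillation power $\nu_{\max}$, to a star’s mass, radius, and temperature:

$$
\frac{\Delta\nu}{\Delta\nu_\odot} \approx \left(\frac{M}{M_\odot}\right)^{1/2} \left(\frac{R}{R_\odot}\right)^{-3/2},
\qquad
\frac{\nu_{\max}}{\nu_{\max,\odot}} \approx \frac{M/M_\odot}{\left(R/R_\odot\right)^{2}\,\sqrt{T_{\mathrm{eff}}/T_{\mathrm{eff},\odot}}}
$$

Measure $\Delta\nu$ and $\nu_{\max}$ from a star’s light curve, add a temperature estimate, and the star’s mass and radius fall out – which is exactly the kind of pattern-extraction problem that machine learning can accelerate across millions of stars.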
A lot of attention has been dedicated to the machine learning technique known as “deep learning”, where computers discern patterns in data without being specifically programmed to do so. In recent years, this technique has found a number of applications, including voice and facial recognition on social media platforms like Facebook.
However, astronomers are also benefiting from deep learning, which is helping them to analyze images of galaxies and understand how they form and evolve. In a new study, a team of international researchers used a deep learning algorithm to analyze images of galaxies from the Hubble Space Telescope. This method proved effective at classifying these galaxies based on what stage they were in their evolution.
Marc Huertas-Company has previously applied deep learning methods to Hubble data for galaxy classification. In collaboration with David Koo and Joel Primack, both professors emeriti at UC Santa Cruz (and with support from Google), Huertas-Company and the team spent the past two summers developing a neural network that could identify galaxies at different stages in their evolution.
“This project was just one of several ideas we had,” said Koo in a recent UCSC press release. “We wanted to pick a process that theorists can define clearly based on the simulations, and that has something to do with how a galaxy looks, then have the deep learning algorithm look for it in the observations. We’re just beginning to explore this new way of doing research. It’s a new way of melding theory and observations.”
For the sake of their study, the researchers used computer simulations to generate mock images of galaxies as they would look in observations by the Hubble Space Telescope. The mock images were used to train the deep learning neural network to recognize three key phases of galaxy evolution that had been previously identified in the simulations. The researchers then used the network to analyze a large set of actual Hubble images.
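A stripped-down version of that train-on-simulations, apply-to-observations workflow might look like the following. The network architecture, image size, and training details are placeholders for illustration; the team’s actual model and data handling differ.

```python
# A minimal sketch of the workflow described above: train a small CNN on
# mock images (whose evolutionary phase is known from the simulation),
# then classify real telescope images. All sizes and data are stand-ins.
import torch
import torch.nn as nn

N_PHASES = 3  # e.g., pre-blue-nugget, blue nugget, post-blue-nugget

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, N_PHASES),   # assumes 64x64 input cutouts
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for mock images rendered from simulations, with phase labels
# that are known exactly because they come from the simulation itself.
mock_images = torch.randn(256, 1, 64, 64)
mock_labels = torch.randint(0, N_PHASES, (256,))

for epoch in range(5):  # train on the simulated galaxies
    optimizer.zero_grad()
    loss = loss_fn(model(mock_images), mock_labels)
    loss.backward()
    optimizer.step()

# Apply the trained network to (stand-in) real survey cutouts.
real_images = torch.randn(8, 1, 64, 64)
phases = model(real_images).argmax(dim=1)
print("predicted evolutionary phases:", phases.tolist())
```

The crucial feature of this setup is that the labels come for free from the simulation, sidestepping the need for humans to hand-classify thousands of training images before the network can be pointed at real data.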
As with previous images analyzed by Huertas-Company, these images were part of Hubble’s Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) project – the largest project in the history of the Hubble Space Telescope. What they found was that the neural network’s classifications of simulated and real galaxies were remarkably consistent. As Joel Primack explained:
“We were not expecting it to be all that successful. I’m amazed at how powerful this is. We know the simulations have limitations, so we don’t want to make too strong a claim. But we don’t think this is just a lucky fluke.”
The research team was especially interested in galaxies that have a small, dense, star-forming region known as a “blue nugget”. These regions occur early in the evolution of gas-rich galaxies, when big flows of gas into the center of a galaxy cause the formation of young stars that emit blue light. To simulate these and other types of galaxies, the team relied on state-of-the-art VELA simulations developed by Primack and an international team of collaborators.
In both the simulated and observational data, the computer program found that the “blue nugget” phase occurs only in galaxies with masses within a certain range. This was followed by star formation ending in the central region, leading to the compact “red nugget” phase, where the stars in the central region exit their main sequence phase and become red giants.
The consistency of the mass range was exciting because it indicated that the neural network was identifying a pattern that results from a key physical process in real galaxies – and without having to be specifically told to do so. As Koo indicated, this study is a big step forward for astronomy and AI, but a lot of research still needs to be done:
“The VELA simulations have had a lot of success in terms of helping us understand the CANDELS observations. Nobody has perfect simulations, though. As we continue this work, we will keep developing better simulations.”
For instance, the team’s simulations did not include the role played by Active Galactic Nuclei (AGN). In larger galaxies, gas and dust are accreted onto a central Supermassive Black Hole (SMBH) at the core, which causes gas and radiation to be ejected in huge jets. Some recent studies have indicated that this may have an arresting effect on star formation in galaxies.
However, observations of distant, younger galaxies have shown evidence of the phenomenon observed in the team’s simulations, where gas-rich cores lead to the blue nugget phase. According to Koo, using deep learning to study galactic evolution has the potential to reveal previously undetected aspects of observational data. Instead of observing galaxies as snapshots in time, astronomers will be able to simulate how they evolve over billions of years.
“Deep learning looks for patterns, and the machine can see patterns that are so complex that we humans don’t see them,” he said. “We want to do a lot more testing of this approach, but in this proof-of-concept study, the machine seemed to successfully find in the data the different stages of galaxy evolution identified in the simulations.”
In the future, astronomers will have more observation data to analyze thanks to the deployment of next-generation telescopes like the Large Synoptic Survey Telescope (LSST), the James Webb Space Telescope (JWST), and the Wide-Field Infrared Survey Telescope (WFIRST). These telescopes will provide even more massive datasets, which can then be analyzed by machine learning methods to determine what patterns exist.
Astronomy and artificial intelligence, working together to better our understanding of the Universe. I wonder if we should set it to the task of finding a Theory of Everything (ToE) too!