There’s a burgeoning arms race between Artificial Intelligence (AI) deepfake images and the methods used to detect them. The latest advancement on the detection side comes from astronomy. The intricate methods used to dissect and understand light in astronomical images can be brought to bear on deepfakes.
The word ‘deepfakes’ is a portmanteau of ‘deep learning’ and ‘fakes.’ Deepfake images are called that because they’re made with a certain type of AI called deep learning, itself a subset of machine learning. Deep learning AI can mimic something quite well after being shown many examples of what it’s being asked to fake. When it comes to images, deepfakes usually involve replacing the existing face in an image with a second person’s face to make it look like someone else is in a certain place, in the company of certain people, or engaging in certain activities.
Deepfakes are getting better and better, just like other forms of AI. But as it turns out, a new tool to uncover deepfakes already exists in astronomy. Astronomy is all about light, and the science of teasing out minute details in light from extremely distant and puzzling objects is developing just as rapidly as AI.
In a new article in Nature, science journalist Sarah Wild looked at how researchers are using astronomical methods to uncover deepfakes. Adejumoke Owolabi, a student of data science and computer vision at the University of Hull in the UK, focused her master's thesis on the idea that light reflected in a person's eyeballs should be consistent, though not identical, between the left and right eye. Owolabi took a high-quality dataset of real human faces from Flickr, generated fake faces with an image generator, and then compared the reflections in the two sets using two astronomical measures of light distribution, the CAS system and the Gini coefficient, to determine which images were deepfakes.
CAS stands for concentration, asymmetry, and smoothness, and astronomers have used it for decades to study and quantify the light from galaxies and other extragalactic sources. It has also made its way into biology and other fields where images need to be carefully examined. The noted astrophysicist Christopher J. Conselice was a key proponent of using CAS in astronomy.
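The asymmetry term gives a feel for how these metrics work. The sketch below is an illustrative simplification, not the researchers' actual pipeline: it compares an image patch with a 180-degree-rotated copy of itself, whereas published CAS definitions also handle centering, background subtraction, and normalization choices omitted here.

```python
import numpy as np

def asymmetry(image):
    """Rotational asymmetry, the 'A' in CAS: rotate the image 180 degrees
    about its center and measure what fraction of the total flux differs."""
    img = np.asarray(image, dtype=float)
    rotated = np.rot90(img, 2)  # 180-degree rotation
    return np.abs(img - rotated).sum() / np.abs(img).sum()

# A perfectly symmetric patch scores 0; lopsided light scores higher.
symmetric = np.ones((8, 8))
lopsided = np.zeros((8, 8))
lopsided[0, 0] = 1.0  # all the light in one corner
print(asymmetry(symmetric))  # 0.0
print(asymmetry(lopsided))   # 2.0 -- every photon moves under rotation
```

Applied to the small patch of an image containing an eyeball's reflection, a score like this quantifies how evenly the highlight is arranged, which is the kind of statistic that can then be compared between the two eyes.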
The Gini index, or Gini coefficient, is also used to study galaxies. It’s named after the Italian statistician Corrado Gini, who developed it in 1912 to measure income inequality. Astronomers use it to measure how light is spread throughout a galaxy and whether it’s uniform or concentrated. It’s a tool that helps astronomers determine a galaxy’s morphology and classification.
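The Gini coefficient translates directly into code. This is a minimal sketch of the standard mean-difference formula as applied to pixel values, not the study's implementation: it returns 0 when light is spread perfectly evenly and approaches 1 when it is concentrated in a few pixels.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative pixel values:
    0 = light spread evenly, near 1 = light concentrated in few pixels."""
    v = np.sort(np.asarray(values, dtype=float).ravel())  # ascending ranks
    n = v.size
    if v.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    # Mean-difference form commonly used in galaxy morphology studies
    return np.sum((2 * ranks - n - 1) * v) / (v.mean() * n * (n - 1))

print(gini(np.full(100, 5.0)))   # 0.0 -- perfectly uniform light
concentrated = np.zeros(100)
concentrated[0] = 1.0
print(gini(concentrated))        # 1.0 -- all light in a single pixel
```

Whether the quantity is income across households or flux across pixels, the statistic is the same; that generality is why it moved so easily from economics to astronomy.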
In her research, Owolabi successfully determined which images were fake 70% of the time.
For her article, Wild spoke with Kevin Pimbblet, director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull. Pimbblet presented the research at the Royal Astronomical Society's National Astronomy Meeting on July 15th.
“It’s not a silver bullet, because we do have false positives and false negatives,” said Pimbblet. “But this research provides a potential method, an important way forward, perhaps to add to the battery of tests that one can apply to try to figure out if an image is real or fake.”
This is a promising development. Open democratic societies are prone to disinformation attacks from enemies without and within, and public figures are prone to similar attacks. Disturbingly, the majority of deepfakes are pornographic and can depict public figures in private and sometimes degrading situations. Anything that can help combat them and bolster civil society is a welcome tool.
But as we know from history, arms races have no endpoint. They go on and on in an escalating series of countermeasures. Look at how the USA and the USSR kept one-upping each other during their nuclear arms race as warhead sizes reached absurd levels of destructive power. So, inasmuch as this work shows promise, the purveyors of deepfakes will learn from it and improve their AI deepfake methods.
Wild also spoke to Brant Robertson in her article. Robertson is an astrophysicist at the University of California, Santa Cruz, whose work in astronomy and astrophysics includes big data and machine learning. “However, if you can calculate a metric that quantifies how realistic a deepfake image may appear, you can also train the AI model to produce even better deepfakes by optimizing that metric,” he said, confirming what many had predicted.
This isn’t the first time that astronomical methods have intersected with Earthly issues. When the Hubble Space Telescope was developed, it contained a powerful CCD (charge-coupled device). That technology made its way into a digital mammography biopsy system, which allowed doctors to take better images of breast tissue and identify suspicious tissue without a physical biopsy. Now, CCDs are at the heart of all of our digital cameras, including those in our mobile phones.
Might our internet browsers one day contain a deepfake detector based on Gini and CAS? How would that work? Would hostile actors unleash attacks on those detectors and then flood our media with deepfake images in an attempt to weaken our democratic societies? It’s the nature of an arms race.
It’s also in our nature to use deception to sway events. History shows that rulers with malevolent intent can more easily deceive populations that are in the grip of powerful emotions. AI deepfakes are just the newest tool at their disposal.
We all know that AI has downsides, and deepfakes are one of them. While their legality is fuzzy, as with many new technologies, we’re starting to see efforts to combat them. The United States government acknowledges the problem, and several laws have been proposed to deal with it. The “DEEPFAKES Accountability Act” was introduced in the US House of Representatives in September 2023. The “Protecting Consumers from Deceptive AI Act” is another related proposal. Both are floundering in the sometimes murky world of subcommittees for now, but they might breach the surface and become law eventually. Other countries and the EU are wrestling with the same issue.
But in the absence of a comprehensive legal framework dealing with AI deepfakes, and even after one is established, detection is still key.
Astronomy and astrophysics could be unlikely allies in combating them.