Can we stop “deepfake geography”?

A new detection method for fake satellite images could help address the emerging threat.

To get people thinking about the threat of “deepfake geography,” University of Washington (UW) researchers built an AI that could create convincingly realistic satellite images — and then used the images to develop a tool to detect these fakes.

What is deepfake geography: Deepfakes are AI-generated images or videos that look convincingly real.

Usually, they depict real people saying or doing things they didn’t actually say or do (the number of pornographic deepfakes of female celebrities is staggering).

However, the tech can also be used to create convincing fake satellite images, or “deepfake geography.”

Why it matters: It’s long been possible to fake satellite images, but creating convincing ones traditionally required artistic skill.

If anyone with a computer can create deepfake geography, the images could be used to spread misinformation or as “evidence” for conspiracy theories.

“Since most satellite images are generated by professionals or governments, the public would usually prefer to believe they are authentic,” lead researcher Bo Zhao told The Verge.

What they did: Researchers are figuring out ways to detect deepfakes of people, but deepfake geography hasn’t gotten the same level of attention.

For their study, the UW researchers first trained an AI to incorporate features from a satellite image of one city into another city — it could create deepfakes of Tacoma with the greenery of Seattle, for example, or the tall buildings of Beijing.
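The paper’s actual model is a deep image-to-image translation network (a GAN), which is far beyond a snippet. As a loose, hypothetical illustration of the underlying idea — imposing one city’s visual statistics onto another city’s structure — here is a stdlib-only sketch that rescales a toy “Tacoma” patch to match a toy “Seattle” patch’s mean and spread. The `transfer_stats` helper and all pixel values are invented for illustration; this is not the UW team’s method.

```python
# Crude stand-in for GAN-based map-to-map translation (illustrative
# only): impose one image's pixel statistics (mean and spread) onto
# another image's structure, loosely analogous to what style-transfer
# normalization layers do inside real translation networks.
import statistics

def transfer_stats(content, style):
    """Rescale `content` pixels to match `style`'s mean and std dev,
    preserving `content`'s relative structure (pixel ordering)."""
    c_flat = [p for row in content for p in row]
    s_flat = [p for row in style for p in row]
    c_mu, s_mu = statistics.mean(c_flat), statistics.mean(s_flat)
    c_sd = statistics.pstdev(c_flat) or 1.0  # avoid divide-by-zero
    s_sd = statistics.pstdev(s_flat)
    return [[(p - c_mu) / c_sd * s_sd + s_mu for p in row]
            for row in content]

tacoma = [[10, 20], [30, 40]]        # base-map structure (toy values)
seattle = [[100, 110], [120, 130]]   # donor "greenery" statistics
fake = transfer_stats(tacoma, seattle)
```

The output keeps Tacoma’s spatial layout (which pixels are brighter than which) while taking on Seattle’s overall brightness statistics — the same intuition, at toy scale, as grafting one city’s greenery onto another’s street grid.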

They then created a dataset containing more than 8,000 satellite images — half were real images of Tacoma, Seattle, and Beijing, and the other half were deepfakes of Tacoma.

Using that dataset, they developed software that could detect 94% of the deepfakes by focusing on the images’ color, clarity, and other characteristics.
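The team’s detector is a trained classifier; the details below are not theirs. As a hedged sketch of what “focusing on clarity” could mean in practice, here is a stdlib-only toy that scores an image’s edge crispness with a Laplacian-like filter and flags images below a made-up threshold — GAN output often has subtly different high-frequency statistics than real sensor data.

```python
# Toy sketch of feature-based fake-image detection (illustrative only;
# not the UW classifier). Images are 2D grids of 0-255 grayscale pixel
# values; the threshold is a hypothetical placeholder, not a real tuning.

def sharpness(img):
    """Mean absolute Laplacian response over interior pixels — a crude
    proxy for edge crispness / high-frequency detail."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            total += abs(lap)
            count += 1
    return total / count if count else 0.0

def looks_fake(img, threshold=5.0):
    """Flag images whose sharpness falls below the (made-up) threshold."""
    return sharpness(img) < threshold

# A flat, detail-free patch vs. a checkerboard patch full of hard edges.
flat = [[128] * 8 for _ in range(8)]
busy = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
print(looks_fake(flat))  # True — no high-frequency detail
print(looks_fake(busy))  # False — strong edge response
```

A real detector would combine many such hand-crafted and learned features and fit a classifier to labeled real/fake examples, rather than relying on a single fixed cutoff.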

“We used both traditional methods and some of the latest GAN-detection algorithms to try to find some clues in terms of how we can detect the deepfakes,” Zhao told GeekWire.

The cold water: This one tool isn’t going to solve the problem of deepfake geography — there are countless ways for people to tweak satellite images, and the software has a very narrow scope.

However, the researchers are hopeful that their study will at least get people thinking about the possibility of fake satellite images — and that we should start looking for ways to detect them now.

