Can we stop “deepfake geography”?

A new detection method for fake satellite images could help address the emerging threat.

To get people thinking about the threat of “deepfake geography,” University of Washington (UW) researchers built an AI that could create convincingly realistic satellite images — and then used the images to develop a tool to detect these fakes.

What is deepfake geography: Deepfakes are AI-generated images or videos that look convincingly real.

Usually, they depict real people saying or doing things they didn’t actually say or do (the number of pornographic deepfakes of female celebrities is staggering).

However, the tech can also be used to create convincing fake satellite images, or “deepfake geography.”

Why it matters: It’s long been possible to create fake satellite images, but producing convincing ones has traditionally required artistic skill.

If anyone with a computer can create deepfake geography, the images could be used to spread misinformation or as “evidence” for conspiracy theories.

“Since most satellite images are generated by professionals or governments, the public would usually prefer to believe they are authentic,” lead researcher Bo Zhao told The Verge.

What they did: Researchers are figuring out ways to detect deepfakes of people, but deepfake geography hasn’t gotten the same level of attention.

For their study, the UW researchers first trained an AI to blend features from a satellite image of one city into a satellite image of another — it could create deepfakes of Tacoma with the greenery of Seattle, for example, or the tall buildings of Beijing.
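
The article doesn’t say which architecture the team used; image-to-image translation GANs (CycleGAN-style models) are a common way to transfer one city’s visual character onto another’s layout. Below is a minimal, illustrative PyTorch sketch of that idea — the toy generator, discriminator, and 64x64 dummy tiles are assumptions for the example, not the study’s setup:

```python
# Illustrative sketch only: a stripped-down image-to-image GAN loop in PyTorch.
# The study's actual model, losses, and data pipeline are not described in the article.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Translates a 3-channel satellite tile of the source city into a fake tile."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores patches of a tile for how much they look like the target city."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # PatchGAN-style score map
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(src_tiles, tgt_tiles):
    """src_tiles: base-city tiles (e.g. Tacoma); tgt_tiles: real tiles of the 'style' city."""
    fake = G(src_tiles)

    # Train the discriminator: real target tiles -> 1, generated tiles -> 0
    real_pred, fake_pred = D(tgt_tiles), D(fake.detach())
    d_loss = bce(real_pred, torch.ones_like(real_pred)) + bce(fake_pred, torch.zeros_like(fake_pred))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make the discriminator call the fakes real
    g_pred = D(fake)
    g_loss = bce(g_pred, torch.ones_like(g_pred))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Dummy 64x64 tiles stand in for real satellite imagery
src, tgt = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
print(train_step(src, tgt))
```

A real system would also need something like a cycle-consistency or reconstruction loss, so the fake keeps the source city’s street layout while borrowing the target city’s look.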

They then created a dataset containing more than 8,000 satellite images — half were real images of Tacoma, Seattle, and Beijing, and the other half were deepfakes of Tacoma.

Using that dataset, they developed software that could detect 94% of the deepfakes by focusing on the images’ color, clarity, and other characteristics.

“We used both traditional methods and some of the latest GAN-detection algorithms to try to find some clues in terms of how we can detect the deepfakes,” Zhao told GeekWire.
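
For a sense of what the “traditional” side of that toolbox can look like, here is a hypothetical sketch in Python: summarize each tile with simple color and sharpness statistics, then feed them to an off-the-shelf classifier. The directory layout, features, and model are illustrative assumptions, not the study’s 94%-accurate detector:

```python
# Hypothetical sketch: hand-crafted color and "clarity" features plus a stock classifier.
# The study's actual feature set and detection model are not specified in the article.
from glob import glob
import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def tile_features(path):
    """Describe a satellite tile with per-channel color stats and a sharpness proxy."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    color = np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])
    gy, gx = np.gradient(img.mean(axis=2))   # local gradients of the grayscale tile
    grad_mag = np.hypot(gx, gy)              # GAN output is often over-smooth or over-sharp
    return np.concatenate([color, [grad_mag.mean(), grad_mag.var()]])

# Assumed directory layout (not from the study): real tiles vs. GAN-generated tiles
real_paths = glob("tiles/real/*.png")
fake_paths = glob("tiles/fake/*.png")

X = np.array([tile_features(p) for p in real_paths + fake_paths])
y = np.array([0] * len(real_paths) + [1] * len(fake_paths))  # 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Modern GAN-detection models skip the hand-crafted step and learn such telltale artifacts directly from the pixels, which is the other half of the toolbox Zhao describes.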

The cold water: This one tool isn’t going to solve the problem of deepfake geography — there are countless ways for people to tweak satellite images, and the software has a very narrow scope.

However, the researchers are hopeful that their study will at least get people thinking about the possibility of fake satellite images — and that we should start looking for ways to detect them now.
