Can we stop “deepfake geography”?

A new detection method for fake satellite images could help address the emerging threat.

To get people thinking about the threat of “deepfake geography,” University of Washington (UW) researchers built an AI that could create convincingly realistic satellite images — and then used the images to develop a tool to detect these fakes.

What is deepfake geography: Deepfakes are AI-generated images or videos that look convincingly real.

Usually, they depict real people saying or doing things they didn’t actually say or do (the number of pornographic deepfakes of female celebrities is staggering).

However, the tech can also be used to create convincing fake satellite images, or “deepfake geography.”

Why it matters: It’s long been possible to fake satellite images, but creating convincing ones has traditionally required artistic skill.

If anyone with a computer can create deepfake geography, the images could be used to spread misinformation or as “evidence” for conspiracy theories.

“Since most satellite images are generated by professionals or governments, the public would usually prefer to believe they are authentic,” lead researcher Bo Zhao told The Verge.

What they did: Researchers are figuring out ways to detect deepfakes of people, but deepfake geography hasn’t gotten the same level of attention.

For their study, the UW researchers first trained an AI to transfer features from satellite images of one city onto images of another — it could create deepfakes of Tacoma with the greenery of Seattle, for example, or the tall buildings of Beijing.

They then created a dataset of more than 8,000 satellite images — half were real images of Tacoma, Seattle, and Beijing, and the other half were deepfakes of Tacoma.

Using that dataset, they developed software that could detect 94% of the deepfakes by focusing on the images’ color, clarity, and other characteristics.

“We used both traditional methods and some of the latest GAN-detection algorithms to try to find some clues in terms of how we can detect the deepfakes,” Zhao told GeekWire.
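The team hasn’t published its detector as code, but the general idea, computing hand-crafted statistics such as color and clarity for each image and feeding them to a simple classifier, can be sketched in miniature. Everything below is illustrative, not the UW team’s method: the “satellite tiles” are synthetic noise grids, the assumption that fakes are over-smoothed stands in for real GAN artifacts, and the nearest-centroid rule stands in for whatever classifier the study actually used.

```python
import random

def blur(img):
    """One 3x3 box-blur pass; in this toy, 'fakes' are over-smoothed."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def sharpness(img):
    """Variance of a Laplacian response: a crude 'clarity' statistic."""
    h, w = len(img), len(img[0])
    resp = [img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
            - 4 * img[y][x]
            for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(resp) / len(resp)
    return sum((r - mean) ** 2 for r in resp) / len(resp)

def brightness(img):
    """Mean pixel value: a stand-in for the study's color statistics."""
    flat = [p for row in img for p in row]
    return sum(flat) / len(flat)

def features(img):
    return (brightness(img), sharpness(img))

def make_tile(seed, fake=False):
    """Synthetic 16x16 'satellite tile'; fakes get two blur passes."""
    rng = random.Random(seed)
    img = [[float(rng.randint(0, 255)) for _ in range(16)]
           for _ in range(16)]
    return blur(blur(img)) if fake else img

def centroid(imgs):
    feats = [features(i) for i in imgs]
    return tuple(sum(f[k] for f in feats) / len(feats) for k in range(2))

def classify(img, real_c, fake_c):
    """Label an image by its nearer feature-space centroid."""
    f = features(img)
    d_real = sum((a - b) ** 2 for a, b in zip(f, real_c))
    d_fake = sum((a - b) ** 2 for a, b in zip(f, fake_c))
    return "real" if d_real < d_fake else "fake"

# Fit centroids on a few labeled tiles, then test on held-out ones.
real_tiles = [make_tile(s) for s in range(6)]
fake_tiles = [make_tile(100 + s, fake=True) for s in range(6)]
real_c = centroid(real_tiles[:4])
fake_c = centroid(fake_tiles[:4])
print(classify(real_tiles[5], real_c, fake_c))  # prints "real"
print(classify(fake_tiles[5], real_c, fake_c))  # prints "fake"
```

The toy works because blurring crushes the Laplacian variance, so real and fake tiles separate cleanly on the clarity feature; real GAN detectors exploit subtler statistical fingerprints in color and texture, which is why the study paired traditional features with dedicated GAN-detection algorithms.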

The cold water: This one tool isn’t going to solve the problem of deepfake geography — there are countless ways for people to tweak satellite images, and the software has a very narrow scope.

However, the researchers are hopeful that their study will at least get people thinking about the possibility of fake satellite images, and about finding ways to detect them now, before the problem grows.

