How AI is turning satellite imagery into a window on the future
What can a picture from space tell you? “You're likely to have a drought here that might lead to civil unrest.”
Satellite image providers say that new artificial intelligence tools, coupled with more and faster satellite data, will enable them to far better anticipate events of geopolitical significance and to notify customers and operators of impending crises.
“Analysis is really great, but it's mainly retroactive, a forensic capability of looking back in time,” Planet CEO William Marshall said in an interview last week. “In principle, generative AI models…can leverage satellite data to predict what is likely to happen: ‘You're likely to have a drought here that might lead to civil unrest.’”
Today, relatively simple AI processes such as machine learning can pick out things like cars or ships, but identifying trends across large amounts of imagery remains a heavily human endeavor. Analysis of some image sets—say, to understand where an adversary force might attempt to stage an invasion—can take months.
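To make the distinction concrete, the "simple" step looks roughly like the sketch below: run a trained detector over each scene and tally what it finds. The `load_scene` and `pretrained_detector` callables here are hypothetical stand-ins for a provider's imagery API and detection model, not any company's actual pipeline; the slow, human-heavy part is interpreting the resulting trends across thousands of such scenes.

```python
# Sketch: per-scene object counting, the kind of "relatively simple" ML step
# described above. `load_scene` and `pretrained_detector` are hypothetical
# placeholders for a provider's imagery API and a trained car/ship detector.
from collections import Counter

def count_objects(scene_ids, pretrained_detector, load_scene):
    """Count detected object classes (e.g. 'car', 'ship') for each scene."""
    counts = {}
    for scene_id in scene_ids:
        image = scene = load_scene(scene_id)          # fetch one satellite image
        detections = pretrained_detector(scene)       # -> [{'label': 'ship', 'score': 0.91}, ...]
        counts[scene_id] = Counter(
            d["label"] for d in detections if d["score"] >= 0.5
        )
    return counts
```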
Planet, whose satellite imagery helped the world understand the preparations for and execution of Russia’s 2022 invasion of Ukraine, has long experimented with various artificial intelligence models. But Marshall said that recent breakthroughs in large language models promise to enable AI to do more and more complex analysis—and far faster than humans.
“Where we're going with these large language models, I think, is more and more towards getting that sort of accuracy within minutes or so—if you've already got the imagery,” he said.
Troy Tomon, Planet’s senior vice president of product and software engineering, said models can be trained not just to make sense of a given data set but to help humans find data relevant to their problems.
“Nobody really wants to see the same place on Earth every day,” Tomon said. “What they want to know is when someplace they care about has some event that happens that’s interesting to them. The change in economic activity, a change in the health of crops, a change in the soil and its properties and how it relates to particular applications. And so what we're finding is that AI is giving us a way to take all of this data and begin to turn it into insights more quickly.”
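The alert-on-change idea Tomon describes can be sketched in a few lines: compare each new observation of an area of interest against a baseline and surface only the meaningful differences. The metric and threshold here are illustrative assumptions, not Planet's product logic.

```python
# Sketch of alert-on-change monitoring: flag only observations that differ
# meaningfully from a baseline. `metric` and `threshold` are illustrative
# assumptions, not a real provider's implementation.

def changes_worth_alerting(observations, baseline, metric, threshold):
    """Yield (date, score) for observations that differ enough from baseline."""
    for date, obs in observations:
        score = metric(obs, baseline)      # e.g. change in a crop-health index
        if score >= threshold:
            yield date, score
```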
But he cautioned that the work of getting AI to save you work is itself a lot of work, requiring experts and operators to continuously train models—not just type prompts.
The National Geospatial-Intelligence Agency sees these kinds of AI capabilities as “absolutely the future,” said Mark Munsell, who leads the agency’s Data and Digital Innovation Directorate.
“Essentially, you use a sensor to detect what's happening on the Earth and based on that detection, you can either use it solely to inform future collection, or you can fuse it with other information,” Munsell said. “You can fuse it with open-source information and news reporting. You can fuse it with what we understand about the area, the activity in the area, and then you can have new insight that will tip and cue new collection. So it's just a cycle that's in constant improvement of itself.”
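The cycle Munsell describes can be thought of as a loop: detections are fused with other sources, and the resulting insight drives the next round of collection. The sketch below is a loose illustration of that tip-and-cue loop; every function in it is a hypothetical placeholder, not an actual agency pipeline.

```python
# Sketch of the tip-and-cue cycle described above: detections are fused with
# open-source reporting and prior knowledge, and the resulting insight cues
# new collection. All functions are hypothetical placeholders.

def tip_and_cue_cycle(initial_tasking, detect, fuse, plan_collection, rounds=3):
    tasking = initial_tasking
    insights = []
    for _ in range(rounds):
        detections = detect(tasking)               # sensor output for tasked areas
        insight = fuse(detections,                 # combine with open-source news,
                       sources=("osint", "news",   # prior knowledge of the area, etc.
                                "area_history"))
        insights.append(insight)
        tasking = plan_collection(insight)         # insight tips and cues new collection
    return insights
```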
Already, he said, the proliferation of earth-imaging satellites and the advent of new and better AI models are reducing the time it takes to collect and analyze images.
Another step forward, according to real-time, space-based intelligence company BlackSky, is to bring AI into the process even before the imagery is distributed to customers.
“Most other companies that are applying AI, the way that they do it: you go into their historical archive and you run AI on old images,” said BlackSky CTO Patrick O'Neil. “So if you want to go count cars from an image from five days ago, that's fine. What we did is, we inserted our AI into the same process that we use to deliver images. So we form an image after the data comes down from the satellite, and then we will run AI and push out the results to our users. That means you get that intelligence right away, and you can do a lot with that information, because now that's machine-readable information.”
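In rough pseudocode, the inline approach O'Neil describes might look something like the sketch below, with detection running as part of image delivery rather than afterward against an archive. The function names are illustrative placeholders, not BlackSky's actual software.

```python
# Sketch of running detection inline with image delivery, rather than over an
# archive after the fact. Functions are hypothetical placeholders for a
# provider's ingest pipeline.

def process_downlink(raw_downlink, form_image, run_detectors, publish):
    image = form_image(raw_downlink)             # radiometric/geometric processing
    detections = run_detectors(image)            # AI runs before delivery, not after
    publish(image=image, detections=detections)  # customers receive machine-readable
                                                 # results along with the image itself
    return detections
```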
O'Neil said that yet another promising avenue is AI-powered collaboration between satellites. BlackSky can revisit a spot more than 15 times a day, which can allow analysts to carefully watch how situations on the ground are developing. But much of the time, watching a specific place is less important than tracking the objects or entities that move through and beyond that space. BlackSky hopes that establishing communications between satellites will enable a new era of smarter satellite image collection, in which entire satellite constellations—with minimal human guidance—follow key objects wherever they go on Earth.
Such a system would allow a human operator to task the system to, say, follow a particular truck or ship. The human could choose whether to be notified when the ship does “weird stuff,” O’Neil said.
In some scenarios, he said, “if you wait for a human to review it, it might be too late, the ship may have already moved away. And so that's a fully automated system where you don't want any latency introduced, and you want it to be completely autonomous. And that, by the way, is where we think the world is going to end up with this.”
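The zero-latency behavior O'Neil describes amounts to treating the human as an optional subscriber rather than a gate, as in the sketch below. The `is_weird` and `retask_constellation` names are hypothetical placeholders for an anomaly rule and a collection planner.

```python
# Sketch of fully automated re-tasking: an anomaly on a tracked vessel triggers
# new collection immediately; notifying a human is optional and does not delay
# the response. `is_weird` and `retask_constellation` are hypothetical.

def handle_track_update(track, is_weird, retask_constellation, notify=None):
    if is_weird(track):                  # e.g. transponder goes dark, loitering,
        retask_constellation(track)      #      or a sudden course change
        if notify is not None:           # optional human notification; collection
            notify(track)                # does not wait for a review
```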
Planet is also looking to use satellite-to-satellite communication to improve collection, analysis, and continuous tracking, and has a project with NASA's commercial services program to develop just that, Marshall said.
But Planet also wants to go one step further and make the satellites themselves smarter. Last month, Planet officials announced that they were working with Nvidia to put Jetson graphics processing units aboard Planet's Pelican-2 satellites slated to launch later this year. That will allow some rudimentary AI in space. The hope is that onboard processing will give analysts on the ground a head start, which could be critical in out-maneuvering an adversary.
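One way onboard processing could deliver that head start is by downlinking small, machine-readable detections ahead of the full scene, as in the sketch below. The interfaces are hypothetical illustrations, not Pelican-2 flight software.

```python
# Sketch of how onboard inference could give analysts a head start: run the
# detector on the spacecraft, downlink compact results and image chips first,
# and let the full scene follow. Hypothetical interfaces only.

def onboard_pass(image, onboard_detector, crop, downlink):
    detections = onboard_detector(image)                  # inference on the onboard GPU
    chips = [crop(image, d["box"]) for d in detections]   # small cutouts around detections
    downlink(priority="high", payload={"detections": detections, "chips": chips})
    downlink(priority="low", payload={"full_image": image})  # full scene arrives later
```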
Munsell described AI aboard satellites, coupled with mesh networking in space, as the next stage of development among satellite companies.
“I think all of this is going to drive more compute, higher bandwidth, more processing done in space,” he said, and compared it to a household Ring camera.
“You're not going to have your Ring camera downloaded to your computer to do the processing, and, five minutes later, turn around and tell you what's happening. You want to do it as fast as possible,” he said. “You also want your main camera to mesh with the other cameras in the area, so that they all can benefit from knowing the activities that are happening across the board.”