Deep Belief Networks at Heart of NASA Image Classification

Deep learning algorithms have pushed image recognition and classification to new heights over the last few years, and those same approaches are now being moved into more complex image classification areas, including satellite imagery. Without making light of the complexity of, say, picking out a single face in a crowd of many thousands, satellite data has its own diverse array of challenges, enough of them to outpace what is currently used for general image recognition at the likes of Facebook and Google.

Spurred by the need for neural networks capable of tackling vast wells of high-res satellite data, a team from the NASA Advanced Supercomputing Division at NASA Ames has sought a new blend of deep learning techniques that can build on existing neural nets to create something robust enough for satellite datasets. The problem, however, is that to do so, they first had to create their own fully labeled training datasets: a terabyte-class problem that was only one small step for deep image classification.

Until the recent Ames work, there was not a single fully labeled satellite dataset with multiple class labels to work from. But armed with two of these, and a new slant on deep belief networks, the team is breathing new life into an established deep learning knowledge base and showing how a new model is proving itself at scale on NASA's terabytes of near-range satellite data, and potentially in other areas as the work expands. What is interesting is that in searching for the best classification mechanism, the team was able to provide some in-depth insight into how different deep learning approaches stack up against one another in terms of accuracy, at least for this use case.

The two new labeled satellite datasets were ultimately put to the test with an approach driven by modified deep belief networks. The results show a classification accuracy of 97.95 percent, around 11 percent better than unmodified deep belief networks, convolutional neural networks, and stacked denoising autoencoders.

As the NASA Ames researchers note, classification of terabytes of high-resolution, highly diverse (due to collection, pre-processing, filtering, etc.) satellite imagery comes with its own host of issues. “Traditional supervised learning methods like random forests do not generalize well for such a large-scale learning problem.” While there have been efforts to recognize roads and other features in aerial imagery, the NASA satellite images are at a far higher resolution and come with far greater complexity.
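As a point of reference, the kind of shallow baseline the researchers are referring to might look like the following sketch, which trains a random forest directly on flattened pixel values. The patch size, class count, and synthetic data here are illustrative assumptions, not NASA's actual setup.

```python
# Hypothetical shallow baseline: a random forest on raw, flattened pixels.
# The 28x28 patch size, six classes, and random data are stand-in assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((5000, 28 * 28))        # stand-in for flattened image patches
y = rng.integers(0, 6, size=5000)      # stand-in for land-cover class labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("baseline accuracy:", clf.score(X_test, y_test))
```

Because the forest only ever sees raw pixel intensities, it has no mechanism for learning higher-level, translation-tolerant features, which is part of why such methods struggle to generalize at this scale.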

Enter deep belief networks, which sound quite a bit more existential than they actually are. Although for researchers wrangling terabyte- to petabyte-class image processing problems, they do have a divine quality given their ability to train and rebuild from high-volume, high-resolution datasets, and several of them at once.

From a higher level, deep belief networks are an offshoot of the larger tree of neural networks: not the same thing, but from the same family. They can be trained in an unsupervised manner and, using the model developed, can reconstruct a similar pattern on a new dataset or be used as a classifier in supervised learning. While this definition only scrapes the surface of how these work, the relevant bit is that the networks are trained one layer at a time and can be used across very large and noisy datasets. For example, something as seemingly small as handwriting requires a very large deep learning effort to get to the point of handwriting recognition, something that deep belief networks are well equipped to handle.
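To make the layer-at-a-time idea concrete, here is a minimal sketch of greedy, layer-wise pretraining using scikit-learn's BernoulliRBM, with a logistic regression stage standing in for the supervised classifier on top. The layer sizes, hyperparameters, and synthetic data are illustrative assumptions rather than the NASA team's configuration, and a real deep belief network would typically add a fine-tuning pass through the whole stack.

```python
# Minimal sketch of greedy, layer-wise DBN-style training with scikit-learn.
# Inputs to BernoulliRBM should be scaled to [0, 1]; all sizes here are
# illustrative assumptions, not the NASA team's actual architecture.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((2000, 784))          # stand-in for flattened patches in [0, 1]
y = rng.integers(0, 10, size=2000)   # stand-in for class labels

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05,
                          n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05,
                          n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),  # supervised layer on top
])
# Each RBM is fit, unsupervised, on the transformed output of the layer
# below it -- exactly the one-layer-at-a-time training described above.
dbn.fit(X, y)
print("training accuracy:", dbn.score(X, y))
```

The pipeline makes the unsupervised-then-supervised split visible: the two RBM stages never see the labels, while the final classifier reuses the features they learned.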

There is quite a bit for the system to learn from in the main dataset the training sets were pulled from: a massive survey of 330,000 scenes across the U.S. The average image tiles are around 6000 pixels wide and 7000 pixels high, which means each weighs in at around 200 MB. This adds up quickly; the entire dataset for this sample was close to 65 terabytes, with a ground sample distance of one meter. These were condensed down to 500,000 image patches for one of the two deep learning sets, covering a range of landscape features, of which one-fifth was used for the training dataset. The training dataset was then put through both supervised and unsupervised training before working against the newly created NASA datasets.
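To give a feel for that data-preparation step, the sketch below cuts a single large scene into fixed-size patches and carves off one-fifth for training. The scene dimensions and the back-of-the-envelope storage math in the comments follow the figures above; the 64-pixel patch size is an assumption for illustration.

```python
# Sketch of tiling one large scene into patches and holding out one-fifth
# for training. Scene dimensions follow the article; the 64-pixel patch
# size is an assumption.
#
# Back-of-the-envelope check on the article's figures:
#   330,000 scenes x ~200 MB per scene ~= 66,000,000 MB ~= 65 TB total.
import numpy as np

scene = np.zeros((7000, 6000), dtype=np.float32)  # stand-in for one tile
patch = 64

patches = np.stack([
    scene[r:r + patch, c:c + patch]
    for r in range(0, scene.shape[0] - patch + 1, patch)
    for c in range(0, scene.shape[1] - patch + 1, patch)
])

rng = np.random.default_rng(0)
rng.shuffle(patches)                   # shuffle before splitting
n_train = len(patches) // 5            # one-fifth reserved for training
train, rest = patches[:n_train], patches[n_train:]
print(f"{len(patches)} patches, {len(train)} for training")
```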

There are detailed results for the various deep learning approaches against the NASA dataset here, as well as descriptions of the relative strengths and weaknesses of each for the complex image recognition task at scale. This use case is interesting in what it means for the future of imagery, but it also shows that this branch of neural networks is ripe for further enhancement and exploration, scaling to new levels of deep recognition for a host of potential applications outside of research. For instance, instead of a face in a crowd of many thousands, imagine a very small spot in the galaxy exhibiting certain characteristics.

It is for this reason that we're continuing to watch neural networks and related approaches, including deep belief networks (the two are related, but take different routes to classification), as they start to slowly supplant traditional supercomputing applications in “big science” areas such as astronomy, as well as in other scientific fields where high-resolution imagery is required.
