Identifying homes with swimming pools is valuable to insurance companies, tax assessors, and public agencies. However, having human analysts scour satellite imagery for pools is too expensive and time-consuming. Enter GBDX and PoolNet. GBDX provides easy access to high-resolution, high-quality satellite imagery on the cloud. PoolNet is a classifier we implemented that uses a convolutional neural network and vast amounts of crowdsourced training data to identify properties that contain swimming pools in our satellite imagery. PoolNet achieves a recall of 93% and a precision of 88%, and can classify more than 60 properties per second.
A large metropolitan area comprises hundreds of thousands of individual property lots. Insurance companies, banks, tax assessors, and many other private companies and public agencies can benefit greatly from identifying which of these properties contain pools. Pools vary in shape, size, color, tree coverage, location within the property, and water level. In satellite imagery, they can be confused with blue tarps, trampolines, small buildings, and other features.
As a result, identifying which properties contain pools, with high accuracy and at scale, is a very challenging problem. On the one hand, having expert human analysts do it is out of the question: the results are accurate, but the process is prohibitively slow and expensive at scale. On the other hand, machine learning is fast but less accurate than human annotation: visual pattern recognition, which comes naturally to humans, is an extremely difficult skill for computers to learn. Machines struggle to account for high object variability, particularly in cluttered scenes like those encountered in satellite imagery.
To get the best of both worlds, we need to combine accurate human judgement with the speed of machines.
Heaps of crowdsourced data
We used our crowdsourcing platform Tomnod (www.tomnod.com) to obtain properties labeled by humans as ‘pool’ or ‘no pool’ in Adelaide, Australia. Since typically only a very small percentage of properties contain pools (< 6%), we directed our crowd to the properties most likely to contain them. We did this by ordering multispectral imagery of the region via GBDX and using a simple algorithm to estimate which properties contained pixels matching the spectral profile of a pool. In this manner, we collected tens of thousands of reliably labeled properties in a matter of days.
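The spectral pre-filtering step can be sketched as follows. The thresholds and the (R, G, B, NIR) pixel format are illustrative assumptions, not the exact algorithm we ran on GBDX; the idea is simply that pool water looks blue in the visible bands and dark in the near-infrared.

```python
# Hypothetical sketch of the spectral candidate filter (thresholds are
# made up for illustration; the production algorithm differs in detail).

def is_pool_candidate(pixel, blue_ratio=1.2, nir_max=0.25):
    """Flag a pixel whose spectrum resembles pool water: blue dominates
    red and green, and near-infrared reflectance is low, since water
    absorbs NIR strongly."""
    r, g, b, nir = pixel
    return b > blue_ratio * r and b > blue_ratio * g and nir < nir_max

def property_contains_pool_pixels(pixels, min_hits=20):
    """A property is sent to the crowd if enough of its pixels match
    the pool-water spectral profile."""
    return sum(is_pool_candidate(p) for p in pixels) >= min_hits
```

Only properties passing this cheap filter go to the crowd, which keeps the fraction of ‘no pool’ properties the annotators see manageable.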
Convolutional Neural Networks (CNNs) are deep neural networks that are inspired by, and roughly mimic, the human visual cortex. As such, they are very effective at ‘learning’ from labeled images, using them as training data to automatically infer rules and abstract properties for recognizing objects, without being explicitly told what to look for.
PoolNet is a CNN that we implemented based on an existing architecture [http://arxiv.org/pdf/1409.1556.pdf]. PoolNet contains 16 layers of neurons and runs on a single GPU. The input to PoolNet is an array of property pixels, obtained by clipping out the pixels of a pan-sharpened satellite image that correspond to that property. The output is the probability that the property contains a pool. PoolNet is trained by progressively feeding it properties with known labels; it refines its internal rules to best match the known outputs.
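The phrase “refines its rules to best match the known output” is, at its core, gradient descent. The minimal sketch below shows one stochastic-gradient update on a single logistic unit; PoolNet does the same thing in spirit, across millions of weights in 16 layers.

```python
# Toy illustration of label-driven weight refinement: one stochastic
# gradient step on a single logistic unit. PoolNet repeats this kind
# of update across a full 16-layer network.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(weights, features, label, lr=0.1):
    """Nudge the weights so the predicted probability moves toward the
    known label (1 = pool, 0 = no pool)."""
    pred = sigmoid(sum(w * x for w, x in zip(weights, features)))
    return [w - lr * (pred - label) * x for w, x in zip(weights, features)]

weights = [0.0, 0.0]
# Feed one known-positive example: the predicted probability rises.
weights = sgd_step(weights, features=[1.0, 1.0], label=1)
```

Repeating such updates over tens of thousands of labeled properties is what “training” means here.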
In more detail, CNNs learn by breaking down complicated questions into simpler ones that they can answer at the level of individual pixels. To produce a classification, the input image is processed through each layer, with early layers answering very simple and specific questions (e.g., is there a feature that is blue and oval?) and later layers searching for more abstract qualities that may indicate the presence of a pool (e.g., is the feature close to a house?). As the image ‘progresses’ through the layers, features that are characteristic of pools are highlighted. If enough of these features are present in the image, the network classifies the image as containing a pool.
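To make the early-layer behavior concrete, here is a toy convolution: a small filter slides over the image and “lights up” wherever a simple pattern is present. The blob filter below is purely pedagogical; PoolNet’s filters are learned from data, not hand-written.

```python
# Toy illustration of what an early convolutional layer does: slide a
# small filter over the image and respond strongly where a simple
# pattern (here, a 2x2 bright blob) appears.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding) on lists of lists,
    followed by a ReLU (keep only positive responses)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(s, 0.0))  # ReLU nonlinearity
        out.append(row)
    return out

image = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
blob_filter = [[1, 1], [1, 1]]  # responds most strongly to 2x2 blobs
response = convolve2d(image, blob_filter)  # peaks at the blob's center
```

Stacking many such filtered “response maps,” layer upon layer, is how simple questions compose into abstract ones.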
PoolNet was trained on tens of thousands of labeled properties in under 2 hours. Once trained, it can classify more than 60 properties per second: if we wanted to classify 10 million properties, PoolNet would do the job in under 2 days! Our tests on a sample of 1650 properties indicate a recall of 93% and a precision of 88%. This means that out of 100 properties with pools, PoolNet correctly identifies 93 of them, and out of 100 properties that PoolNet thinks contain pools, 88 actually do.
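These figures are easy to sanity-check. The confusion counts below are illustrative, chosen to reproduce the reported recall and precision on a 100-pool sample; the throughput arithmetic uses the 60-properties-per-second rate.

```python
# Sanity check of the reported metrics and throughput. The tp/fp/fn
# counts are hypothetical, picked to match recall 93% / precision 88%.

def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)  # of predicted pools, how many are real
    recall = tp / (tp + fn)     # of real pools, how many were found
    return precision, recall

# 93 pools found, 13 false alarms, 7 pools missed:
p, r = precision_recall(tp=93, fp=13, fn=7)

# 10 million properties at 60 properties/second:
days = (10_000_000 / 60) / (24 * 3600)  # comes out just under 2 days
```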
PoolNet’s accuracy is high but still leaves room for improvement. For example, feeding the near-infrared band to the network could help distinguish water from blue objects such as tarps, thus reducing false positives and increasing precision.
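One standard way the near-infrared band separates water from blue tarps is a water index such as NDWI, (G − NIR)/(G + NIR): water absorbs NIR strongly, so pools score high, while NIR-reflective tarps score low. A minimal sketch:

```python
# NDWI (Normalized Difference Water Index): high for water, which
# absorbs near-infrared, and low for non-water surfaces like tarps,
# which reflect it. Reflectance values below are illustrative.

def ndwi(green, nir):
    return (green - nir) / (green + nir)

pool_score = ndwi(green=0.2, nir=0.02)  # water: NIR nearly absorbed
tarp_score = ndwi(green=0.2, nir=0.30)  # blue tarp: NIR reflected
```

Whether fed as a raw extra band or as a derived index, this NIR signal gives the network information the visible bands alone cannot provide.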
We have immediate plans to deploy PoolNet at a continental scale. This involves creating a GBDX PoolNet task which can be deployed by GBDX users on our imagery at scale. The input to the task is a satellite image and a shapefile indicating the property boundaries. The output is the shapefile augmented with a label (‘pool’, ‘no pool’) for each property. GBDX allows us to run thousands of these tasks in parallel.
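The task’s input/output contract can be sketched as below, with the shapefile stood in for by a list of property records; a real implementation would read and write actual shapefiles with a geospatial library, and `classify` is a hypothetical stand-in for the trained network.

```python
# Sketch of the PoolNet GBDX task interface: take property records,
# attach a 'pool'/'no pool' label to each, and return the augmented
# records. `classify` is a hypothetical stand-in for the trained CNN.

def run_poolnet_task(properties, classify, threshold=0.5):
    """Label each property record using the classifier's probability."""
    labeled = []
    for prop in properties:
        score = classify(prop["pixels"])  # probability of a pool
        record = dict(prop, label="pool" if score >= threshold else "no pool")
        labeled.append(record)
    return labeled
```

Because each task only needs its own image and property boundaries, thousands of such tasks can run in parallel on GBDX, one per image strip or region.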
We are also working on a generic GBDX classifier task based on PoolNet, which can classify geometries belonging to multiple classes. You can easily imagine extending PoolNet to cars, boats, solar panels, and other objects of interest.
You can play with the PoolNet code in the following link:
Stay tuned to this blog for updates!