Facebook is using your Instagram photos to train its image recognition AI

In the race to build ever more sophisticated deep learning models, Facebook has a secret weapon: billions of images on Instagram.

In research the company is presenting today at F8, Facebook details how it took what amounted to billions of public Instagram photos that users had annotated with hashtags and used that data to train its own image recognition models. It relied on hundreds of GPUs running around the clock to parse the data, and was ultimately left with deep learning models that beat industry benchmarks, the best of which achieved 85.4 percent accuracy on ImageNet.

If you’ve ever put a few hashtags on an Instagram photo, you’ll know that doing so isn’t exactly a research-grade process. There is usually some logic to why users tag an image with a particular hashtag; the challenge for Facebook was sorting out what was relevant across billions of images.

When you’re operating at this scale — the largest of the tests employed 3.5 billion Instagram images spanning 17,000 hashtags — even Facebook doesn’t have the resources to closely supervise the data. While other image recognition benchmarks may rely on millions of photos that human beings have pored over and annotated individually, Facebook had to find ways to clean up what users had submitted that would work at scale.

The “pre-training” research focused on developing systems for determining relevant hashtags; that entailed discovering which hashtags were synonymous while also learning to prioritize more specific hashtags over the more general ones. This ultimately led to what the research group called the “large-scale hashtag prediction model.”
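The paper’s exact de-duplication pipeline isn’t spelled out here, but the core idea, collapsing synonymous tags onto a canonical concept and preferring the more specific one, can be sketched against WordNet. The snippet below is a minimal illustration assuming NLTK’s WordNet corpus is installed; the first-noun-sense heuristic and both helper functions are assumptions made for the example, not Facebook’s actual code.

```python
# Minimal sketch of hashtag canonicalization against WordNet.
# Assumes: nltk is installed and nltk.download('wordnet') has been run.
from nltk.corpus import wordnet as wn

def canonical_label(hashtag):
    """Map a raw hashtag to a canonical WordNet synset name, if any."""
    tag = hashtag.lstrip("#").lower().replace("-", "_")
    synsets = wn.synsets(tag, pos=wn.NOUN)
    if not synsets:
        return None
    # Take the first (most common) noun sense, so #pup, #puppy and
    # #puppies all collapse onto the same synset.
    return synsets[0].name()

def more_specific(tag_a, tag_b):
    """If one tag's synset sits below the other's in the hypernym
    hierarchy, prefer the more specific (descendant) tag."""
    a, b = wn.synsets(tag_a, pos=wn.NOUN), wn.synsets(tag_b, pos=wn.NOUN)
    if not a or not b:
        return None
    sa, sb = a[0], b[0]
    if sb in sa.closure(lambda s: s.hypernyms()):
        return tag_a  # tag_a is a descendant of tag_b
    if sa in sb.closure(lambda s: s.hypernyms()):
        return tag_b
    return None

print(canonical_label("#puppies"))               # e.g. 'puppy.n.01'
print(more_specific("golden_retriever", "dog"))  # 'golden_retriever'
```

At Instagram scale the hard part is running something like this across a 17,000-tag vocabulary and billions of images, but the synonym-merging and specificity-ranking steps the researchers describe boil down to this kind of lookup.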

The privacy implications here are interesting. On one hand, Facebook is merely using what amounts to public data (no private accounts), but when a user posts an Instagram photo, how aware are they that they’re also contributing to a database that’s training deep learning models for a tech mega-corp? These are the questions of 2018, but they’re also issues that Facebook is undoubtedly growing more sensitive to out of self-preservation.

It’s worth noting that these models are centered on object-focused image recognition. Facebook won’t be able to use this data to predict who your #mancrushmonday is, and it also isn’t using the database to finally understand what makes a photo #lit. It can recognize dog breeds, plants, food and plenty of other categories it has pulled from WordNet.

The accuracy gained from this data isn’t necessarily the impressive part here. The increases in image recognition accuracy were only a couple of percentage points in many of the tests; what’s fascinating is the pre-training process that turned data this vast and this noisy into something effective with only weak supervision. The models trained on this data are likely to be broadly useful to Facebook, and better image recognition has the potential to bring users better search and accessibility tools, as well as strengthening Facebook’s efforts to combat abuse on its platform.
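To make the “pre-train on hashtags, then transfer” recipe concrete, here is a hedged PyTorch sketch of that two-stage setup. The hashtag and target class counts come from the article; the ResNet-50 backbone, the multi-label loss, and the omitted training loops are stand-in assumptions rather than the architecture or hyperparameters Facebook actually used.

```python
# Hedged sketch: weakly supervised pre-training on hashtag labels,
# then fine-tuning on a clean labelled target task.
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_HASHTAG_CLASSES = 17_000  # hashtag vocabulary size cited in the article
NUM_TARGET_CLASSES = 1_000    # e.g. ImageNet classes

# 1) Pre-train: predict hashtags from images (noisy, weak supervision).
model = resnet50()  # randomly initialized backbone
model.fc = nn.Linear(model.fc.in_features, NUM_HASHTAG_CLASSES)
hashtag_loss = nn.BCEWithLogitsLoss()  # multiple hashtags per image
# ... train on (image, hashtag_vector) pairs from the scraped photos ...

# 2) Transfer: swap the head and fine-tune on clean target labels.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)
target_loss = nn.CrossEntropyLoss()    # one human-verified label per image
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# ... fine-tune on the labelled benchmark dataset ...
```

The point the article is making is that stage one never needs hand-labelled data: the noisy hashtag vector is the only supervision the backbone sees before fine-tuning, which is what makes the approach viable at billions of images.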
