
Facebook Used Your Instagram Photos To Train Its Image Recognition AI: Another Privacy Issue?


Barely out of its data privacy scandal rut, Facebook might have another issue on its hands: Instagram. Or more specifically, how it collects photos for enhancing its in-house artificial intelligence.
Facebook just revealed that it used 3.5 billion publicly shared Instagram photos to train its image recognition software. The method works remarkably well, too: Facebook says the resulting model reached 85.4 percent accuracy when tested on ImageNet, a popular benchmark dataset.
That effectively renders Facebook's AI model the best image recognition system in the world, according to Facebook's director of applied machine learning, Srinivas Narayanan.
The results were unveiled onstage during the company's F8 developer conference in California, the same event where it announced the budget Oculus Go headset along with a bunch of updates headed to Messenger, Instagram, WhatsApp, and others.

Facebook's Image Recognition AI

Facebook's research pushes its image recognition technology beyond recognizing objects: it can identify subcategories, too. For instance, a photo of food isn't just food; it's Indian or Italian food. The same goes for bird species, people in costumes or uniforms, and other objects. This level of granularity could make for a better experience on Facebook, says computer vision research lead Manohar Paluri.
"You can put it into descriptions or captioning for blind people, or visual search, or enforcing platform policies."
So how does Facebook do it? First, it had to develop systems for finding relevant hashtags on Instagram, which ultimately led the team to create what they call a "large-scale hashtag prediction model." Facebook essentially relies on hashtags to comb through billions of images, and that's a challenge because not every hashtag reflects the actual content of the photo. For example, a person can tag a post with #GoldenGateBridge even if the bridge itself isn't clearly visible.
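To make the hashtag-as-label idea concrete, here is a minimal, hypothetical sketch of the kind of preprocessing such a pipeline needs: collapsing synonymous hashtags into one canonical label and dropping tags that rarely describe what's in the photo. The synonym map and stop list below are illustrative stand-ins, not Facebook's actual data.

```python
# Hypothetical sketch: turning raw Instagram hashtags into training labels.
# SYNONYMS and NOISY_TAGS are invented examples for illustration.
SYNONYMS = {
    "#goldengatebridge": "golden gate bridge",
    "#ggbridge": "golden gate bridge",
    "#doggo": "dog",
    "#pupper": "dog",
}
# Tags that are popular but rarely describe the image content.
NOISY_TAGS = {"#love", "#instagood", "#photooftheday"}

def hashtags_to_labels(hashtags):
    """Map raw hashtags to canonical class labels, dropping non-visual tags."""
    labels = set()
    for tag in hashtags:
        tag = tag.lower()
        if tag in NOISY_TAGS:
            continue
        # Fall back to the bare hashtag text when no canonical form is known.
        labels.add(SYNONYMS.get(tag, tag.lstrip("#")))
    return sorted(labels)

print(hashtags_to_labels(["#GGBridge", "#InstaGood", "#doggo"]))
# → ['dog', 'golden gate bridge']
```

At Facebook's scale this mapping is learned rather than hand-written, but the goal is the same: many messy hashtags, one clean label per visual concept.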
As a result, Facebook chose what's called weakly supervised learning, which trains on labels that are noisy or imprecise (in this case, user-supplied hashtags) rather than on carefully curated annotations. The model builds its predictions by learning from a vast number of these loosely labeled examples.
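Since a photo can carry several hashtags at once, one common way to train against such noisy labels is a multi-label objective: each label becomes an independent yes/no target, so one wrong or missing hashtag doesn't poison the whole example. A minimal sketch of that loss, with made-up numbers (this is the general technique, not Facebook's exact formulation):

```python
import math

def multilabel_bce(probs, targets):
    """Average binary cross-entropy over labels.

    probs:   model's predicted probability for each label (0..1)
    targets: 1 if the hashtag-derived label applies, else 0
    Each label is scored independently, which tolerates noisy or
    incomplete tagging better than a single forced class per image.
    """
    return -sum(
        t * math.log(p) + (1 - t) * math.log(1 - p)
        for p, t in zip(probs, targets)
    ) / len(probs)

# A confident, mostly correct prediction is penalized far less
# than a confidently wrong one.
good = multilabel_bce([0.9, 0.2, 0.1], [1, 0, 0])
bad = multilabel_bce([0.1, 0.8, 0.9], [1, 0, 0])
print(good < bad)  # → True
```

In practice this loss sits on top of a deep convolutional network and billions of images; the point of the sketch is only how multiple noisy labels per photo are scored.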

But What About Privacy?

Let's discuss the elephant in the room here: privacy. Going through people's Instagram posts to train image recognition technology sounds like another potential data scandal. Notably, Facebook says it only uses publicly shared photos, meaning private accounts are excluded. But consider this: when users post a photo, do they know their content could be used to train deep learning models?
That's a crucial question to ask the social network, especially at a time when the company is under fire for mishandling user data. Better image recognition will, of course, give users better tools moving forward, but they also have to ask at what cost.
