Surprising, But Important: Facebook Sorta Shuts Down Its Face Recognition System
from the good-to-see dept
A month ago, I highlighted how Facebook seemed uniquely bad at taking a long-term view and publicly committing to doing things that are good for the world, but bad for Facebook in the short run. So it was a bit surprising earlier this week to see Facebook (no, I'm not calling it Meta, stop it) announce that it was shutting down its Face Recognition system and (importantly) deleting over a billion "face prints" that it had stored.
The company's announcement on this was (surprisingly!) open about the various trade-offs here, both for society and for Facebook, though (somewhat amusingly) throughout the announcement Facebook repeatedly highlights the supposed societal benefits of its facial recognition.
Making this change required careful consideration, because we have seen a number of places where face recognition can be highly valued by people using platforms. For example, our award-winning automatic alt text system, that uses advanced AI to generate descriptions of images for people who are blind and visually impaired, uses the Face Recognition system to tell them when they or one of their friends is in an image.
[...]
But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole. There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.
One interesting tidbit buried in this is that only about a third of Facebook users opted in to the facial recognition tool (despite the company pushing it heavily). At the very least, that showed a large number of users weren't comfortable with the technology.
There's also the issue that, while they're turning off the tool and deleting the face prints, the NY Times notes they're hanging on to the algorithm that was built on all those faces:
Although Facebook plans to delete more than one billion facial recognition templates, which are digital scans of facial features, by December, it will not eliminate the software that powers the system, which is an advanced algorithm called DeepFace. The company has also not ruled out incorporating facial recognition technology into future products, Mr. Grosse said.
That's resulted in some (expected) amount of cynicism from Facebook's critics that Facebook "got what it wanted" and is now moving on. However, I think that's a bit silly. Facebook could have easily kept the facial recognition program going. Of all the regulatory pressures the company is facing, this was way down the list and barely on the radar.
And, to make a bigger point, here's a case where the company is actually doing the right thing: turning off a questionable product and deleting a ton of data it collected. And we should at least encourage both Facebook and other companies to be willing to make that decision based on recognizing the societal risks, and without waiting around until they're forced to do so.
Filed Under: ai, data, deepface, facial recognition, privacy, society, trade-offs
Companies: facebook