Federal Watchdog Finds Lots Of Facial Recognition Use By Gov't Agencies, Very Little Internal Oversight
from the getting-a-real-'Wild-West'-vibe-from-this dept
Facial recognition tech remains controversial, given its tendency to produce false positives and send cops after the wrong people. Private companies offering even sketchier tech than what's already in use (looking at you, Clearview) have made everything worse.
The upside is that this state of affairs has prompted at least one federal government oversight entity to do some actual oversight. The Government Accountability Office (GAO) has released its report [PDF] on federal agencies' use of facial recognition tech, and it contains a couple of surprises along with, unfortunately, several of the expected disappointments. (via the Washington Post)
For instance, while we expect law enforcement agencies like the FBI, DEA, ATF, and TSA to use facial recognition tech, the report notes that a total of 20 agencies own or use the tech. That list also includes some unexpected agencies, like the IRS, US Postal Service, the FDA, and NASA.
There's also a surprising number of Clearview users among federal agencies, which seems unwise given the company's history of being sued, investigated, exposed as dishonest, and just kind of terrible in every way. Of the 20 agencies that admitted using this tech, ten have used or hold contracts with Clearview, outpacing other third-party offerings by a 2-to-1 margin.
What are these agencies using this tech for? Mainly criminal investigations.
According to the FBI, the system has been used for investigations of violent crimes, credit card and identity fraud, missing persons, and bank robberies, among others. The Department of Homeland Security’s Office of Biometric Identity Management offers a similar service to its partners (e.g., U.S. Immigration and Customs Enforcement). Specifically, the agency’s Automated Biometric Identification System can be used to search a photo of an unknown individual and provide potential matches (i.e., generate leads) to support criminal investigations. Federal agencies also reported using state, local, and non-government systems to support criminal investigations.
This includes people suspected of committing crimes during last summer's nationwide protests against police violence. One of the agencies on this list is the US Postal Inspection Service, which used Clearview to identify suspects who damaged USPS property or stole mail. The US Capitol Police also used Clearview to "generate leads" following the January 6th attack on the US Capitol.
That's what's known. There's a lot that's unknown, thanks to federal agencies apparently not caring who's doing what with whatever facial recognition tech they have access to.
Thirteen federal agencies do not have awareness of what non-federal systems with facial recognition technology are used by employees. These agencies have therefore not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy. Most federal agencies that reported using non-federal systems did not own systems. Thus, employees were relying on systems owned by other entities, including non-federal entities, to support their operations.
Yay! Your federal tax dollars at work putting citizens at risk of being misidentified right into holding cells or deportation or whatever. The less you know, I guess. Some agencies had to "poll" employees to figure out how often this tech had been used, something that relies on honest self-reporting for accuracy. Literally any other system would provide better data, including the old standby "making some shit up."
Then there's mind-boggling stuff like this:
Officials from another agency initially told us that its employees did not use non-federal systems; however, after conducting a poll, the agency learned that its employees had used a non-federal system to conduct more than 1,000 facial recognition searches.
The line between "we don't do this" and "we do this pretty much nonstop" is finer than I thought.
ICE, which has used this tech for years, says it's still "in the process of implementing a mechanism to track" use of non-federal facial recognition systems by employees. So far, the agency has come up with nothing better than hanging up a couple of clipboards.
According to U.S. Immigration and Customs Enforcement officials, in November 2020 they were in the process of developing a list of approved facial recognition technologies that employees can use. In addition, log-in sheets will be made available to employees, allowing supervisors to monitor employee use of the technologies.
Behold the awesome power of ICE, deploying its billions in budget to send someone to Office Depot with a $20 bill and tell them to bring back change and a receipt.
In addition to being careless and cavalier about the use and deployment of unproven tech, these thirteen government agencies -- with their sullen shrugs -- may also be admitting to violations of privacy law.
When agencies use facial recognition technology without first assessing the privacy implications and applicability of privacy requirements, there is a risk that they will not adhere to privacy-related laws, regulations, and policies. There is also a risk that non-federal system owners will share sensitive information (e.g., a photo of a suspect) about an ongoing investigation with the public or others.
The GAO closes its depressing report with 26 recommendations -- thirteen of them being "start tracking this stuff, you dolts." The other thirteen -- bringing it to two recommendations per failing federal agency -- tell the agencies to assess the risks of the tech, including possible violations of privacy laws and the harm done when these systems misidentify people.
There's no good news in this report. Agencies are using unproven, sometimes completely unvetted tech without internal or external oversight. They've rolled out these programs without required Privacy Impact Assessments or internal tracking/reporting measures in place. The only pleasant surprise is that this hasn't resulted in more false arrests and detentions. That can't be credited to the care and diligence of the agencies using this tech, because the GAO wasn't able to find much evidence of either. But the report does put the issue on the radar of Congress members who haven't been paying much attention to this tech's drift toward ubiquity.
Filed Under: 4th amendment, accountability, facial recognition, federal government, gao, oversight, surveillance