from the algorithmic-bias-and-privacy dept
While we often read about (and most likely experience ourselves) public outrage over personal data pulled from websites like Facebook, the news often fails to highlight the staggering amounts of personal data collected by our governments, both directly and indirectly. Outside of traditional Fourth Amendment protections governing searches and seizures, personally identifiable information (PII) – information that can potentially be used to identify an individual – is collected when we submit tax returns, apply for government assistance programs, or interact with government social media accounts.
Technology has not only expanded governments’ capability to collect and retain our data, but has also transformed the ways in which that data is used. It is now common for entities to collect metadata, or data that summarizes and provides information about other data (for example, the author of a file or the date and time the file was last edited). The NSA, for instance, collected more than 500 million call detail records during 2017, much of which it did not have the legal authority to collect. Governments now even purchase huge amounts of data from third-party tech companies.
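To make “data about data” concrete, here is a minimal sketch in Python (standard library only; “example.txt” is a hypothetical path, not a reference to anything in particular) showing the metadata a filesystem keeps about an ordinary file without its contents ever being read:

```python
# Minimal sketch: the metadata a filesystem stores about a single file.
# Standard library only; "example.txt" is a hypothetical path.
import os
import datetime

def file_metadata(path):
    """Return a few metadata fields stored alongside a file's contents."""
    stats = os.stat(path)
    return {
        "size_bytes": stats.st_size,
        "last_modified": datetime.datetime.fromtimestamp(stats.st_mtime),
        "last_accessed": datetime.datetime.fromtimestamp(stats.st_atime),
        "owner_uid": stats.st_uid,  # numeric owner ID on Unix-like systems
    }

print(file_metadata("example.txt"))
```

None of these fields reveals what is in the file, yet together they say a great deal about who did what and when, which is exactly why call detail records are so revealing at scale.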
The implementation of artificial intelligence tools throughout the government sector has also influenced what these entities do with our data. Governments aiming to “reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data” have implemented tools like algorithmic decision making in both criminal and civil contexts. Algorithms can be effective tools for remedying government inefficiencies, and idealistic champions believe that artificial intelligence can eliminate subjective human emotion to reach a logical and “fairer” outcome. Data collected by governments plays a central role in developing these tools: individual data is aggregated into data sets, which are then used to drive algorithmic decision making, as the sketch below illustrates.
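As a purely illustrative sketch (every field name and record below is invented, not any agency’s actual schema or method), this is roughly what it means to aggregate individual data into a training set for an automated decision tool:

```python
# Illustrative only: how individual records might be flattened into a
# training set for an automated eligibility decision. All fields and the
# sample records are invented for illustration.
from dataclasses import dataclass

@dataclass
class CitizenRecord:
    age: int
    reported_income: float
    prior_applications: int
    approved: bool  # historical outcome a model would learn to imitate

def to_training_set(records):
    """Flatten individual records into (features, labels) for training."""
    features = [[r.age, r.reported_income, r.prior_applications] for r in records]
    labels = [r.approved for r in records]
    return features, labels

records = [
    CitizenRecord(34, 28000.0, 1, True),
    CitizenRecord(52, 19000.0, 3, False),
]
X, y = to_training_set(records)
# A model trained on X and y can only imitate the historical decisions in
# the labels -- including whatever biases those past decisions contained.
```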
With all this data, what steps do governments take to protect the information they collect from their citizens?
Currently, there are real and valid concerns that governments fail to take adequate steps to protect and secure data. Take, for instance, the ever-increasing number of data breaches in densely populated cities like New York and Atlanta. In 2018, the city of Atlanta was subjected to a major ransomware attack by an Iranian-based group of hackers that shut down major city systems and led to outages affecting “applications customers use to pay bills or access court related information” (as per Richard Cox, the city's Chief of Operations at the time). Notably, the city had been heavily criticized for its subpar IT and cybersecurity infrastructure and its apathetic attitude toward fixing known vulnerabilities.
While the city claimed there was little evidence that the attack had compromised any of its citizens’ data, this assertion seems unrealistic given the scope and duration of the attack and the number of systems that were compromised.
Race, Algorithms and Data Privacy
As a current law student, I have given much thought over the last few years to the role of technology as the “great equalizer.” For decades, technology proponents have advocated for its increased use in the government sector by highlighting its ability to level the playing field and provide opportunities for success to all, regardless of race, gender, or income.
However, having gained familiarity with the legal and criminal justice systems, I have begun to see that human racial and gender biases, coupled with government officials’ failure to understand or question technological tools like artificial intelligence, often lead to inequitable results. Further, government funds allocated for technological tools often go to police and prosecution rather than to the defense and protection of vulnerable communities.
There is a real threat that algorithms do not achieve the intended goals of objectivity and fairness, but instead perpetuate the inequalities and biases that already exist within our societies. Artificial intelligence has enabled governments to cultivate “big data” and has thus added another tool to their arsenals of surveillance technology. “Advances in computational science have created the ability to capture, collect, and combine everyone's digital trails and analyze them in ever finer detail.” Through the weaponization of big data, governments can even more easily identify, control, and oppress marginalized groups of people within a society.
As our country addresses the decades of systemic racism inherent in our political and societal systems, privacy must be part of the conversation and the reform. I believe that data privacy today is regarded as a privilege rather than a right, and this privilege is often reserved for white, middle- and upper-class citizens. The complex, confusing, and lengthy nature of privacy policies requires not only some familiarity with data privacy and with what governments and companies do with data, but also the time, energy, and resources to read through the entire document. If the receipt of vital benefits were contingent on my acceptance of a government website's privacy policy, I have no doubt that I would accept the terms regardless of how unfavorable they were to me.
The very notion of the right to privacy in the United States is derived, historically, from white, male, upper-class values. In 1890, Samuel D. Warren and Louis Brandeis (a future Supreme Court Justice) penned their famous and often-quoted “The Right to Privacy” in the Harvard Law Review. The article was, in fact, a response to the discomfort that accompanied their high-society lives: the invention of the camera meant that their parties were now captured and displayed prominently in newspapers and tabloid publications.
These men did not intend to include the general population when creating this new right to privacy, but instead aimed to safeguard their own interests. They were not looking to protect the privacy of the most vulnerable populations, but to make sure that the local tabloid didn’t publish any drunken or incriminating photos from the prior night’s party. Even the traditional conception of privacy, which uses physical space and the home to illustrate the public-versus-private divide, is a biased and elitist concept. Should someone, then, lose their right to privacy if they do not have a home?
In the criminal justice system, how do we know that courts and governments are devoting adequate resources to securing the records and data of individuals in prison or in court? Large portions of budgets are spent on prosecutorial tools, and it seems as though racial biases prevent governments from devoting monetary resources to protecting minorities’ data and privacy as they move through the criminal justice system. Governments reveal little about whether they notify prisoners and defendants when their data is compromised; that opacity alone is reason to scrutinize these systems moving forward.
Moving Forward
Moving forward, how do we address the race and inequity issues surrounding data privacy and hold our governments accountable? Personally, I think we need to start with better data privacy legislation. Currently, California is the only state with a tangible data privacy law, a model that should be expanded to the federal level. Limits must be placed on how long governments can retain data and on what can be done with the data collected, and proper protocols for data destruction must be established. I believe there is a dire need for better cybersecurity legislation that places the burden on government entities to develop cybersecurity protections that exceed the bare minimum.
The few pieces of cybersecurity legislation that do exist tend to be reactive rather than proactive, and often rely on ambiguous terms like “reasonable cybersecurity features,” which ultimately give companies and entities more room to claim that they did what was reasonable for the situation at the time. Additionally, judges and lawyers need to be held accountable for data protection as well. Because technology is so deeply integrated into the court systems and into the practice of law itself, ethical and professional codes of conduct should hold judges and supporting court staff to a standard that requires them to actively work to protect data.
We also need to implement better education in schools regarding the importance of data privacy and what governments and companies do with our personally identifiable information. Countries throughout the European Union have developed robust school programs focused on teaching digital privacy and digital skills. Programs like Poland’s “Your data – your concern” enable young people to understand and take ownership of their privacy rather than blindly click “Accept” on a privacy policy. To address economic and racial inequalities, non-profit groups should also aim to integrate these courses into public programming, adult education curricula, and prison educational programs.
Finally, and most importantly, we need to place limits on, and reconsider, the technological tools that both local and federal governments are using, along with the racial biases inherent in those tools. Because technology can be weaponized to continue oppression, I question whether governments should implement these solutions before addressing the underlying systemic racism that already exists within our societies. It is important to remember that algorithms, and the outcomes they generate – especially in the context of government – reflect the existing biases and prejudices in our society. It is clear that governments are not yet willing to accept responsibility for the biases present in these algorithms, or to strive to protect data regardless of race, gender, and income level.
For example, a study of an algorithm used to predict a criminal defendant’s likelihood of reoffending found an 80% error rate in its predictions of violent recidivism: only about one in five people the algorithm flagged as likely to commit a violent crime actually did so. Problematically, these errors affected minority groups significantly more than white defendants. The study found that the algorithm incorrectly labeled Black defendants as future re-offenders at almost double the rate at which it incorrectly labeled white defendants. Because recidivism scores factor into sentencing and bail determinations, these algorithms can disastrously impact minorities’ livelihoods by subjecting them to harsher punishment and more time in prison; individuals lose valuable time and are unable to work or support their families and communities. Until women and minorities have more of a presence in both government and programming, and can use their diverse perspectives to help ensure that algorithms do not encode biases, these technological tools will continue to oppress.
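To pin down what “incorrectly labeled at almost double the rate” actually measures, here is a small illustrative sketch; the predictions and outcomes below are invented toy data, not the study’s, but the metric computed (the false positive rate per group) is the kind of disparity at issue:

```python
# Hedged sketch: the false positive rate per group -- the share of people
# who did NOT reoffend but were labeled "high risk" anyway.
# All numbers below are invented for illustration.

def false_positive_rate(predictions, outcomes):
    """FPR: of those who did not reoffend, the fraction labeled high risk."""
    labels_for_non_reoffenders = [p for p, o in zip(predictions, outcomes) if not o]
    if not labels_for_non_reoffenders:
        return 0.0
    return sum(labels_for_non_reoffenders) / len(labels_for_non_reoffenders)

# predictions: 1 = labeled high risk; outcomes: 1 = actually reoffended
group_a_preds, group_a_outcomes = [1, 1, 0, 1, 0, 0], [0, 1, 0, 0, 0, 0]
group_b_preds, group_b_outcomes = [1, 0, 0, 0, 1, 0], [0, 0, 0, 0, 1, 0]

print("Group A FPR:", false_positive_rate(group_a_preds, group_a_outcomes))  # 0.4
print("Group B FPR:", false_positive_rate(group_b_preds, group_b_outcomes))  # 0.2
```

In this toy example, Group A’s false positive rate is double Group B’s: members of Group A who never reoffend are twice as likely to be wrongly branded high risk, which is the shape of the disparity the study reported.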
We must now ask whether we have succeeded in creating an environment in which these tools can be implemented to do more good than harm. While I think these tools currently cause more harm, I am hopeful that as our country begins to address and remedy the underlying systemic racism that exists, we can build government systems that safely implement these tools in ways that benefit all.
Chynna Foucek is a rising third year student at Brooklyn Law School, where she focuses on Intellectual Property, Cybersecurity and Data Privacy law.
Filed Under: ai, algorithm bias, algorithms, data privacy, privacy, racism