The Tech Policy Greenhouse is an online symposium where experts tackle the most difficult policy challenges facing innovation and technology today. These are problems that don't have easy solutions, where every decision involves tradeoffs and unintended consequences, so we've gathered a wide variety of voices to help dissect existing policy proposals and better inform new ones.

Is Data Privacy A Privilege? The Racial Implications Of Technology-Based Tools In Government

from the algorithmic-bias-and-privacy dept

While we often read about (and most likely experience ourselves) public outrage over personal data pulled from websites like Facebook, the news often fails to highlight the staggering amounts of personal data collected by our governments, both directly and indirectly. Even outside the traditional Fourth Amendment framework for constitutional searches and seizures, personally identifiable information (PII) – information that can be used to identify an individual – is collected when we submit tax returns, apply for government assistance programs, or interact with federal and state government social media accounts.

Technology has not only expanded governments’ capability to collect and hold onto our data, but has also transformed the ways in which that data is used. It is now common for entities to collect metadata, data that summarizes and provides information about other data (for example, the author of a file or the date and time the file was last edited). The NSA, for instance, collected metadata from over 500 million call detail records during 2017, much of which it did not have the legal authority to collect. Governments now even purchase huge amounts of data from third-party tech companies.
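To make this concrete, here is a minimal Python sketch of the metadata attached to a single ordinary file; the filename is hypothetical, and a real collection program would sweep fields like these across millions of files or call records:

    # Minimal sketch: the metadata an ordinary file carries, separate from
    # its contents. "report.pdf" is a hypothetical path.
    import os
    from datetime import datetime, timezone

    info = os.stat("report.pdf")

    print("size in bytes:", info.st_size)
    print("last modified:", datetime.fromtimestamp(info.st_mtime, tz=timezone.utc))
    print("owner user id:", info.st_uid)  # meaningful on Unix-like systems

None of those fields is the file’s content, yet together they reveal who touched what, and when – the same logic behind call detail records.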

The implementation of artificial intelligence tools throughout the government sector has influenced what these entities do with our data. Governments aiming to “reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data” have implemented tools like algorithmic decision making in both criminal and civil contexts. Algorithms can be effective tools for remedying government inefficiencies, and idealistic champions believe that artificial intelligence can strip away subjective human emotion to reach a logical and “fairer” outcome. Data collected by governments plays a central role in developing these tools: individual data is taken and aggregated into data sets, which are then used for algorithmic decision making.
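To picture what that aggregation looks like, here is a small, hypothetical sketch; the field names and weights are invented for illustration, and a real system would use many more features and a trained statistical model rather than fixed weights:

    # Hypothetical sketch: individual records are flattened into rows of a
    # feature table, which a scoring rule (standing in for a trained model)
    # then consumes. All field names and weights are invented.
    people = [
        {"id": 1, "prior_contacts": 0, "missed_filings": 1},
        {"id": 2, "prior_contacts": 3, "missed_filings": 0},
    ]

    # Aggregate: reduce each person to an (id, feature vector) row.
    dataset = [(p["id"], [p["prior_contacts"], p["missed_filings"]]) for p in people]

    WEIGHTS = [0.7, 0.3]  # a real model would learn these from aggregated data

    for person_id, features in dataset:
        score = sum(w * x for w, x in zip(WEIGHTS, features))
        print(f"person {person_id}: risk score = {score:.2f}")

Whatever biases are baked into the features or the weights flow straight through to the score, a point this piece returns to below.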

With all this data, what steps do governments take to protect the information they collect from their citizens?

Currently, there are real and valid concerns that governments fail to take the steps necessary to protect and secure data. Take, for instance, the ever-increasing number of data breaches in densely populated cities like New York and Atlanta. In 2018, the city of Atlanta was subjected to a major ransomware attack by an Iran-based group of hackers that shut down major city systems and led to outages affecting “applications customers use to pay bills or access court related information,” as Richard Cox, the city's Chief of Operations at the time, put it. Notably, the city had been heavily criticized for its subpar IT and cybersecurity infrastructure and its apathetic attitude toward fixing known vulnerabilities.

While the city claimed there was little evidence that the attack had compromised any of its citizens’ data, this assertion seems unrealistic given the scope and duration of the attack and the number of systems that were compromised.

Race, Algorithms and Data Privacy

As a current law student, I have given much thought over the last few years to the role of technology as the “great equalizer.” For decades, technology proponents have advocated for its increased use in the government sector by highlighting its ability to level the playing field and provide opportunities for success to all, regardless of race, gender or income.

However, having gained familiarity with the legal and criminal justice systems, I have begun to see that human racial and gender biases, coupled with government officials’ failure to understand or question technological tools like artificial intelligence, often lead to inequitable results. Further, government funds allocated for technological tools often go to policing and prosecution rather than to defense and the protection of vulnerable communities.

There is a real threat that algorithms do not achieve the intended goals of objectivity and fairness, but instead further perpetuate the inequalities and biases that already exist within our societies. Artificial intelligence has enabled governments to cultivate “big data” and has thus added another tool to their arsenals of surveillance technology. “Advances in computational science have created the ability to capture, collect, and combine everyone's digital trails and analyze them in ever finer detail.” Through the weaponization of big data, governments can even more easily identify, control, and oppress marginalized groups of people within a society.

As our country addresses the decades of systemic racism inherent in our political and societal systems, privacy must be included in the conversation and the reform. I believe that data privacy today is regarded as a privilege rather than a right, and this privilege is often reserved for white, middle- and upper-class citizens. The complex, confusing and lengthy nature of privacy policies not only requires some familiarity with data privacy and with what governments and companies do with data, but also the time, energy and resources to read through the entirety of the document. If the receipt of vital benefits were contingent on my acceptance of a government website privacy policy, I have no doubt that I would accept the terms regardless of how unfavorable they were to me.

The very notion of the right to privacy in the United States is derived, historically, from white, male, upper-class values. In 1890, Samuel D. Warren and Louis Brandeis (a future Supreme Court Justice) penned their famous and often-quoted “The Right to Privacy” in the Harvard Law Review. The article was, in fact, a response to the discomfort that accompanied their high-society lives, as the invention of the camera meant that their parties were now captured and displayed prominently in newspapers and tabloid publications.

These men did not intend to include the general population when creating this new right to privacy, but instead aimed to safeguard their own interests. They were not looking to protect the privacy of the most vulnerable populations, but to make sure that the local tabloid didn’t publish any drunken or incriminating photos from the prior night’s party. Even the traditional conception of privacy, which uses physical space and the home to illustrate the public versus private divide, is a biased and elitist concept. Should someone, then, lose their right to privacy if they do not have a home?

In the criminal justice system, how do we know that courts and governments are devoting adequate resources to securing the records and data of individuals in prison or in court? Large portions of budgets are spent on prosecutorial tools, and it seems as though racial biases keep governments from devoting monetary resources to protecting minorities’ data and privacy as they move through the criminal justice system. Governments reveal little about whether they notify prisoners and defendants when their data is compromised, which is all the more reason these systems must be scrutinized moving forward.

Moving Forward

How do we address the race and inequity issues surrounding data privacy and hold our governments accountable? Personally, I think we need to start with better data privacy legislation. Currently, California is the only state with a tangible data privacy law, and protections like it should be expanded to the federal level. Limits must be placed on how long governments can hold onto data and what can be done with the data collected, and proper protocols for data destruction must be established. I believe there is a dire need for better cybersecurity legislation that places the burden on government entities to develop cybersecurity protections that exceed the bare minimum.

The few pieces of cybersecurity legislation that do exist tend to be reactive rather than proactive, and they often rely on ambiguous terms like “reasonable cybersecurity features,” which ultimately give companies and entities more room to claim they did what was reasonable for the situation at the time. Additionally, judges and lawyers need to be held accountable for data protection as well. Because technology is so deeply integrated into the court systems and the practice of law itself, ethical and professional codes of conduct should hold judges and supporting court staff to a standard under which they must actively work to protect data.

We also need to implement better education in schools regarding the importance of data privacy and what governments and companies do with our personally identifiable information. Countries throughout the European Union have developed robust school programs that teach digital privacy and related skills. Programs like Poland’s “Your data – your concern” enable young people to understand and take ownership of their privacy rather than blindly clicking “Accept” on a privacy policy. To address economic and racial inequalities, non-profit groups should also aim to integrate these courses into public programming, adult education curricula, and prison educational programs.

Finally, and most importantly, we need to place limits on, and reconsider, the technological tools that both local and federal governments are using, and confront the racial biases inherent in those tools. Because technology can be weaponized to continue oppression, I question whether governments should implement these solutions before addressing the underlying systemic racism that already exists within our societies. It is important to remember that algorithms and the outcomes they generate – especially in the context of government – reflect existing biases and prejudices in our society. It is clear that governments are not yet willing to accept responsibility for the biases present in these algorithms or to strive to protect data regardless of race, gender and income level.

For example, a study of an algorithm used to predict a criminal defendant’s likelihood of reoffending found an 80% error rate in its predictions of violent recidivism. Problematically, these errors impacted minority groups significantly more than they did white defendants: the study determined that the algorithm incorrectly flagged Black defendants as future re-offenders at almost double the rate at which it incorrectly flagged white defendants. Because recidivism scores are considered in sentencing and bail determinations, these algorithms disastrously impact minorities’ livelihoods by subjecting them to harsher punishment and more time in prison; individuals lose valuable time and are unable to work or support their families and communities. Until women and minorities have more of a presence in both government and programming, and can use their diverse perspectives to ensure that algorithms do not contain biases, these technology tools will continue to oppress.
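The disparity the study documented is, at bottom, a gap in false positive rates between groups, and that is something any audit can compute. Here is a minimal sketch using invented records (not the study’s actual data):

    # Hypothetical audit sketch: compute the false positive rate (people
    # predicted to reoffend who in fact did not) separately for each group.
    # These records are invented; a real audit would use actual outcome data.
    from collections import defaultdict

    # Each record: (group, predicted_to_reoffend, actually_reoffended)
    records = [
        ("black", True, False), ("black", True, True),
        ("black", True, False), ("black", False, False),
        ("white", True, False), ("white", False, False),
        ("white", False, True), ("white", False, False),
    ]

    false_positives = defaultdict(int)
    non_reoffenders = defaultdict(int)

    for group, predicted, actual in records:
        if not actual:
            non_reoffenders[group] += 1
            if predicted:
                false_positives[group] += 1

    for group in sorted(non_reoffenders):
        rate = false_positives[group] / non_reoffenders[group]
        print(f"{group}: false positive rate = {rate:.0%}")

On these invented numbers, the false positive rate for the “black” group comes out at double the “white” group’s, mirroring the kind of gap the study reported; an audit like this is only possible when outcome data is collected and disclosed.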

We must now ask whether we have succeeded in creating an environment in which these tools can be implemented to help more than they harm. While I think these tools currently cause more harm than good, I am hopeful that as our country begins to address and remedy the underlying systemic racism that exists, we can create government systems that safely implement these tools in ways that benefit all.

Chynna Foucek is a rising third-year student at Brooklyn Law School, where she focuses on Intellectual Property, Cybersecurity and Data Privacy law.


Filed Under: ai, algorithm bias, algorithms, data privacy, privacy, racism


Reader Comments



  1. Upstream (profile), 24 Jun 2020 @ 2:11pm

    Biased programmers, using biased data sets, will create biased programs, which will then yield biased results. It is just a variation on the old 'garbage in, garbage out' (GIGO) problem. You must cure the root of this problem before the flowers will be non-poisonous.

    Nothing should be allowed to be contingent on a 'we can violate your privacy' policy, lest everything be contingent on one.

    Maybe getting hacked should constitute prima facie evidence that 'reasonable cybersecurity features' were not in place? Maybe it should at least shift the burden to the hackee to demonstrate that proper, and reasonably effective, measures were in place. Of course, as time goes by, what is considered proper and reasonable will be constantly evolving. Anyone holding PII should be expected to keep up.

    While I believe the right to privacy is a natural human right, any specific notions put forth by upper-class white males (or anyone else, for that matter), that they want applied to themselves and their own privacy, should also be broadly applicable to everyone.

    Encryption, currently under heavy attack, should be the default, and should be required, everywhere, for everything, whenever possible. Are there even any situations where data encryption would not be possible?


  2. Anonymous Coward, 24 Jun 2020 @ 6:25pm

    Word.


  3. bobob, 25 Jun 2020 @ 10:06am

    Data privacy is a privilege only if enough people are too complacent to insist on it being a right.


  4. Navi (profile), 28 Jun 2020 @ 10:57pm

    tech developments

    Recent tech developments such as artificial intelligence (AI) and blockchain are progressively being used by governments to improve the efficiency of the services they offer. For example, blockchain technologies can allow governments to keep vital records protected and confidential within a secure ledger.


