Activists Cheer On EU's 'Right To An Explanation' For Algorithmic Decisions, But How Will It Work When There's Nothing To Explain?

from the not-so-easy dept

I saw a lot of excitement and happiness a week or so ago around reports that the EU's new General Data Protection Regulation (GDPR) may include a "right to an explanation" for algorithmic decisions. It's not clear that this is absolutely true, but it's based on a reading of the agreed-upon text of the GDPR, which is scheduled to go into effect in two years:
Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which "significantly affect" users. The law will also create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them.
Lots of people on Twitter seemed to be cheering this on. And, indeed, at first glance it sounds like a decent idea. As we've discussed recently, there has been a growing awareness of the power and faith placed in algorithms to make important decisions, and sometimes those algorithms are dangerously biased in ways that can have real consequences. Given that, it seems like a good idea to have a right to find out the details of why an algorithm decided the way it did.

But it could also get rather tricky and problematic. Part of the promise of machine learning and artificial intelligence these days is that we no longer fully understand why algorithms decide things the way they do. This applies to lots of different areas of AI and machine learning, but you can see it in the way that AlphaGo beat Lee Sedol at Go earlier this year: it made decisions that seemed to make no sense at all, but worked out in the end. The more machine learning "learns," the less possible it is for people to directly understand why it's making those decisions. And while that may be scary to some, it's also how the technology advances.

So, yes, there are lots of concerns about algorithmic decision making -- especially when it can have a huge impact on people's lives -- but a strict "right to an explanation" seems like it may actually create limits on machine learning and AI in Europe, potentially hamstringing projects by requiring them to stay within the limits of human understanding. The full paper on this does more or less admit this possibility, but suggests that it's okay in the long run, because the transparency aspect will be more important:
There is of course a tradeoff between the representational capacity of a model and its interpretability, ranging from linear models (which can only represent simple relationships but are easy to interpret) to nonparametric methods like support vector machines and Gaussian processes (which can represent a rich class of functions but are hard to interpret). Ensemble methods like random forests pose a particular challenge, as predictions result from an aggregation or averaging procedure. Neural networks, especially with the rise of deep learning, pose perhaps the biggest challenge—what hope is there of explaining the weights learned in a multilayer neural net with a complex architecture?
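
To make that tradeoff concrete, here's a quick sketch of my own (using scikit-learn on toy data; this is an illustration, not code from the paper): a linear model hands you one readable weight per input, while a random forest trained on the same data gives you an answer with nothing comparable to point at.

```python
# Illustration of the interpretability tradeoff described above (assumes scikit-learn).
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)

linear = LinearRegression().fit(X, y)
print(linear.coef_)                  # one weight per input: easy to explain any prediction

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict(X[:1]))         # an answer, averaged over 100 trees
print(forest.feature_importances_)   # ranks the inputs overall, but doesn't say *why*
                                     # this particular prediction came out the way it did
```
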
In the end, though, the authors think these challenges can be overcome:
While the GDPR presents a number of problems for current applications in machine learning they are, we believe, good problems to have. The challenges described in this paper emphasize the importance of work that ensures that algorithms are not merely efficient, but transparent and fair.
I do think greater transparency is good, but I also worry about rules that might hold back useful innovations. Prescribing exactly how machine learning and AI need to work, this early in the process, may be a problem as well. I don't think there are necessarily easy answers here -- in fact, this is definitely a thorny problem -- so it will be interesting to see how this plays out in practice once the GDPR goes into effect.


Filed Under: ai, algorithms, eu, gdpr, machine learning, right to an explanation


Reader Comments



  • identicon
    Freddie Rodgers, 8 Jul 2016 @ 8:56pm

    Everybody calm

    Explaining would be likely snowballs into bells and whiteboards nobody knows if x + 2 * jack cheese is really a sum. I for one look beneath the tractor tires for what's left of my smartphone. If they don't, we still have Fridays to look disgruntled or confused.


  • identicon
    Anonymous Coward, 8 Jul 2016 @ 10:37pm

    This is how I feel when I play chess against a (good) computer. It destroys me.

    Also studies have shown that computers are better than humans at diagnosing illnesses. So why are humans still doing this?


    • icon
      Richard (profile), 10 Jul 2016 @ 3:01am

      Re:

      Also studies have shown that computers are better than humans at diagnosing illnesses. So why are humans still doing this?
      Because the studies use their own definition of "better".


  • icon
    orbitalinsertion (profile), 9 Jul 2016 @ 3:28am

    The amount of useful algorithms ever used in a useful and decent manner will probably be so small as to be worth explaining as far as possible, and otherwise explaining what is inexplicable. The personalized marketing, advertising, and content sorting, along with their invasive, commoditized data gathering, can fuck right off. Of course, the push in the EU otherwise seems to be the opposite: catering to BS whining from companies about how data protection makes things soooo haaarrrrd.

    No one is good with data. They don't secure it, they don't properly anonymize what should be anonymized, and they use it for manipulative purposes. Let's see the amazing algorithms and non-abusive uses of data first. There are some pretty interesting things one can do already, only those things mostly aren't in any kind of general use or made available to the public. Never mind the increasing data and algorithmic processing that comes with the rise of the hideously awful IoT, or what governments can demand or slurp off the data or processed data.

    Really, if this might impede some innovation, pretty much so be it. If it's all that good, you'll be able to explain it satisfactorily. I'd opt for more of this, as long as we don't have control over how our data is commoditized or what algorithms (mysterious human thinking included) affect us. If I don't care about an explanation, I'll just opt in. Oh wait, that isn't a choice we get to make either.


  • identicon
    anon, 9 Jul 2016 @ 3:59am

    It's great news

    It's great news to have a right to an explanation.

    The argument that computer algorithms cannot show valid reasoning is no excuse.

    If - for example - a bank refuses me a loan, they can't hide behind 'the computer said so'. If the bank cannot show the computer's reasoning because it's a mystery to them, then the bank has to employ a human who can show why I don't qualify.

    That way I have a fair chance at fair treatment.


    • identicon
      Anonymous Coward, 9 Jul 2016 @ 7:19am

      Re: It's great news

      Oh sure, we can originate a loan for you sir ... it will be at an APR of 40%.

      Ahhh, the reason for such an interest rate is that we are greedy bastards without a conscience.


    • identicon
      Anonymous Coward, 9 Jul 2016 @ 10:56am

      Re: It's great news

      What if the reasoning the computer happens to come up with is race-related, based on the computer's history with questions involving race?

      What if one of the questions the bank asks is race-related, and later on the computer determines that people of a certain race are more likely to default on their loans? What if the computer then decides to partly hold that against people of that race who apply for a loan, because the computer is just applying its algorithms?

      What if these algorithms end up using race to partly determine someone's likelihood of being a suspect in a crime? Or in a specific type of crime?

      Now, if a computer were making medical diagnoses, there would be no problem with it using race as a diagnostic question, to figure out whether race is statistically associated with a particular diagnosis. But if the computer were using it as a loan qualification indicator or a crime suspect indicator ...


  • identicon
    Anonymous Cowherd, 9 Jul 2016 @ 5:19am

    If you can't explain the reasoning behind your decisions, your decisions can't be considered reasonable. This requirement does not go away just because there's a computer involved.


    • icon
      Hephaestus (profile), 9 Jul 2016 @ 1:13pm

      Re:

      Neural nets, evolutionary algorithms, and deep learning are inherently difficult to understand. Explaining why they came to a particular decision is almost impossible. When the model is sufficiently complex, and continuously updating itself based on the current input(s), it becomes impossible to describe the rationale for a given decision after the fact, because the "state" of the machine will have changed.

      Basically, the EU just shot itself in the foot where AI is concerned.

      https://en.wikipedia.org/wiki/Genetic_algorithm
      https://en.wikipedia.org/wiki/Artificial_neural_network
      https://en.wikipedia.org/wiki/Deep_learning


      • identicon
        Anonymous Coward, 10 Jul 2016 @ 3:45am

        Re: Re:

        If a decision is impossible to understand then it is impossible to trust. AI like that should be shot in the foot.


      • identicon
        Anonymous Coward, 10 Jul 2016 @ 6:25am

        Re: Re: Inherently simple

        These kinds of systems are inherently simple to understand. The range of solutions is very small. In effect, the process is a simple iterative solution with a selective change criterion. The number of iterations may be huge, as in more than 7, but is still computationally quite small.

        When the combinations required exceed the number of atoms in the universe by many orders of magnitude then we can consider them complex.


      • identicon
        Anonymous Coward, 10 Jul 2016 @ 7:49am

        Re: Neural nets, evolutionary algorithms, and deep learning

        Great point,

        Explaining an algorithm to your average Joe, in any fashion he is likely to understand, is a rather expensive prospect.

        Few people understand this stuff in any meaningful way. The ones who are getting paid to study it are pretty much all working for the dark side. Those who understand some of it and are with the rebellion are likely to have difficulty finding ways to monetize technologies that are inherently designed to preserve civil rights.

        Based on recent history, there is even some question as to whether product releases of civil rights oriented technologies are likely to get their authors locked up.

        IMHO micro-targeting creates a predisposition for psychological feedback loops. The way these are generated is not predictable, since a person's interest in something is easy to distinguish, but the emotion driving that interest isn't.

        For example, I regard a lot of drug company advertising as disturbing enough to count as assault. There is no question that these ads are engineered with the intent of being as disturbing as they are, which makes the assault premeditated.

        But if I reference any of them online, algorithms will decide to send me MORE advertising of that nature. Which, if I were unhinged, might result in the sellers of those products coming into a greater need of them.

        So this loop is:

        assault -> consumer complaint -> more intense assault

        This is fairly simple, but the cumulative effects of all such social media ego-bait loops cannot be reasonably predicted. They do seem heavily weighted towards maximally leveraging base behaviors, though.

        Or, IOW, it isn't unreasonable to suggest that micro-targeting is parasitic to civilization itself.


  • identicon
    Anonymous Cowherd, 9 Jul 2016 @ 5:28am

    Or, in other words, inexplicable guesses, hunches and gut feelings don't suddenly become a reasonable basis for decision-making just because it's a computer that's having them.


  • identicon
    Anonymous Coward, 9 Jul 2016 @ 7:21am

    Your drone just decided to blast me a new asshole, care to tell me why?


  • identicon
    Anonymous Coward, 9 Jul 2016 @ 8:05am

    Are we really building systems where even the program itself can't articulate its own criteria?

    There are only so many inputs into a system. If nothing else, a company should be able to explain what those inputs are - what the program could possibly be considering.


    • identicon
      Anonymous Coward, 9 Jul 2016 @ 9:19am

      Re:

      Basically, in machine learning, the "inputs" are just huge amounts of raw data, and the outputs are patterns that the learning network has found. Sometimes those patterns aren't obvious to a human, given the data size and how it's being analyzed. Machine learning may not always look at data in a way that makes logical sense to humans, and that's part of the reason why it's such a powerful tool.

      In the end, the computer basically "has a hunch" that there's a pattern because of all the data it's looked at.


      • identicon
        Anonymous Coward, 9 Jul 2016 @ 9:57am

        Re: Re:

        That is also a way of creating self-fulfilling prophecies, through false correlations.


      • identicon
        Anonymous Coward, 9 Jul 2016 @ 11:18am

        Re: Re:

        There are many problems with this too though.

        As everyone always says: garbage in, garbage out.

        In my medicine example above, you could theoretically create a medical-doctor computer that has everyone sign up for an account.

        Upon signing up, you enter a bunch of data into the computer -- medical records, known allergies, symptoms, race, gender, birth date, diet -- and you answer a bunch of questions and follow-up questions.

        Later on, the computer makes a recommendation. That recommendation could be something as simple as a diet change (maybe you have a vitamin deficiency), or it might be something like a drug.

        Then the computer can ask follow-up questions: How well did the recommendation work in the short run? In the long run? What were the short-term and long-term side effects?

        As more and more people enter all this data, the computer can start to use it to make better and better recommendations.

        The problem with trying to determine whether someone is guilty based on a computer algorithm is: how do you have an honest follow-up question session? Your inputs are much more limited. The computer can only really determine whether someone with these characteristics, under these circumstances, is likely to be convicted if prosecuted -- but are those convictions the result of the person actually being guilty, or the result of biases in human judgements of guilt? The computer doesn't actually know whether a specific person is guilty; the only thing it can ever know is whether a person with certain characteristics, in certain situations, is likely to be convicted if tried. And that itself is subject to all kinds of human biases.

        For instance, say the person being tried is of a particular race, and assume the jury is of that same race. It could be that people of a particular race judging the guilt of someone of their own race are more likely to give a guilty verdict than people of another race judging someone of their own race, regardless of whether the person on trial actually committed the crime. What you can't tell the computer as a follow-up is whether the person is actually guilty; you can only give it the results of the trial. So the computer can only use that data to figure out the likelihood of a guilty verdict, not whether the person actually committed a crime, since the computer doesn't actually know. Hence garbage in, garbage out: if convictions of a specific type are garbage, then the computer is going to return garbage relative to actual guilt. It's ultimately a human judging guilt, and if that human judgement is garbage, so will the computer's judgement be, since the computer can only base its results and statistics on the judgements you feed it.


        • identicon
          Anonymous Coward, 9 Jul 2016 @ 11:35am

          Re: Re: Re:

          The bank loan from the example above is also a good example of where computers can give more objective results, because the answer to the follow-up question of whether or not someone defaulted on a loan is very objective to a bank. The bank can enter all this data into a computer, and the computer can learn, with experience, what characteristics result in loan defaults.

          With a crime, the problem is that the follow-up question -- whether or not someone actually committed the crime -- is itself often what's in question, and the computer can only base its results on these possibly subjective follow-up answers, which can be tainted by human bias.

          That's not to say there can't be objectivity to it. For instance, a computer can look at a set of characteristics and determine the likelihood that drugs will be found on a specific property if searched. Perhaps objective results could be entered into a computer after a search, and the computer could then use that data to help judge the merits of future searches. But even that could be subject to bias in all sorts of ways. For instance, if the police are more likely to search people of a certain race due to being racist, and people of that race are more likely to have a specific characteristic in common, then the input data the computer is receiving may itself be tainted, resulting in biased, racist outputs even if the input questions don't directly address race. It's up to the police not to discriminate by race over who qualifies to have their data entered into the computer as a possible candidate to be searched, not to ignore the computer's recommendations based on race, not to conduct searches based on race (so that the post-search results the computer receives are not based on race), and to ensure that the searches the police do conduct (i.e., on properties) are just as thorough regardless of the race of the person being searched. Garbage in, garbage out.


          • identicon
            Anonymous Coward, 9 Jul 2016 @ 11:55am

            Re: Re: Re: Re:

            For instance it could be the case that people of a specific race, age, or gender are more likely to drive a specific type of car.

            If a bank is more likely to give loans to people of a specific race, the computer's input data will be skewed toward people of that race, and its results may not do a good job of reflecting people of other races.


        • identicon
          Anonymous Coward, 10 Jul 2016 @ 12:01pm

          Re: Re: Re:

          (Also, with the doctor example above, you need to be careful to watch out for trolls that may feed the computer garbage just to be nefarious.)


          • identicon
            Anonymous Coward, 10 Jul 2016 @ 1:12pm

            Re: Re: Re: Re: watch out for trolls

            I recall a while back stumbling on a random HTTP link generator that looped back onto itself with a delayed page load for each subsequent page. The purpose was to slow down inconsiderate web crawlers and fill them up with garbage. They called it a tar baby if I remember correctly.

            Seeding links to a number of these "tar babies" into the HTTP referrer field using a plugin could befoul quite a few micro-targeting databases.

            Of course, Dr. Evil would insist that modifying the referrer field in your own software is a DoS. As if broadcasting data about your communications to unrelated third parties without your consent were somehow consistent with natural law to begin with.


        • icon
          btr1701 (profile), 11 Jul 2016 @ 2:14pm

          Re: Re: Re:

          > For instance say the person being tried is of a
          > particular race.

          When would that not be the case?


      • identicon
        Anonymous Coward, 9 Jul 2016 @ 11:25am

        Re: Re:

        That being the case, certain scenarios would demand human review.


    • identicon
      Anonymous Coward, 9 Jul 2016 @ 11:16am

      Re:

      Are we really building systems where even the program itself can't articulate its own criteria?

      There are only so many inputs into a system. If nothing else, a company should be able to explain what those inputs are - what the program could possibly be considering.


      Indeed we are. A classic example happened some years back with a vision system. The military wanted a system that would automatically distinguish NATO tanks from Soviet tanks. They built a neural network and fed it photographs of different tanks until it was able to identify them correctly. Then they tried using the newly trained system "in the field" and it failed abysmally. So they went back to the data and tried to figure out what was happening. As it turned out, the problem was the photographs they used to train the system. For NATO tanks, since they had ready access to them, the photographs were nice and clear, well focused, etc. But the Soviet tank photographs were whatever could be taken surreptitiously: fuzzy, unclear, etc. What the neural network had been learning was "clear, focused images = NATO tanks; badly focused, fuzzy images = Soviet."
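
      A toy version of that failure mode, with made-up numbers (scikit-learn here is purely illustrative -- this is not the actual military system): the label is secretly correlated with photo sharpness, so the model learns sharpness, not tanks.

```python
# Toy sketch of the "tank classifier" failure: the class label is confounded
# with photo sharpness, so the model latches onto sharpness instead of content.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
sharpness = np.concatenate([rng.normal(0.8, 0.2, n // 2),    # "NATO" photos: crisp
                            rng.normal(0.3, 0.2, n // 2)])   # "Soviet" photos: blurry
tank_features = rng.normal(0.0, 1.0, n)                      # what we *wanted* it to learn
X = np.column_stack([sharpness, tank_features])
y = np.array([0] * (n // 2) + [1] * (n // 2))                # 0 = NATO, 1 = Soviet

model = LogisticRegression().fit(X, y)
print(model.coef_)  # the weight on sharpness dwarfs the weight on the "real" features
```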


      • icon
        Richard (profile), 10 Jul 2016 @ 12:58am

        Re: Re:

        Exactly!

        The statement "we don't know how the system works" is true of many new AI developments when they first break through. After about a year it stops being true but by that time the MSM have lost interest. Hence the public gets the impression that we don't understand how AI works - however most experts (talking in private) will admit that we DO understand how these things work - but the MSM is much more interested in you if you say that you don't.


  • icon
    DB (profile), 9 Jul 2016 @ 9:28am

    Yes, we are certainly building systems where no one understands the criteria.

    That's considered one of the major advantages of Machine Learning ('ML'), Deep Neural Networks (DNN), etc. You don't have to pay programmers and experts for years to develop and test a system. You just train the network, do some automatic refinement of the structure, train a bit more, and you can magically solve problems.

    It does work quite well, but the essence really is that no one understands what the structure is doing.

    If you know how FFTs work, think of one of the intermediate results. This is a very well understood network for calculating a result, yet most people couldn't explain what the intermediate result represents.
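
    For the curious, a bare-bones sketch of that point (NumPy only; my own toy code, not any standard library routine): each butterfly stage of a radix-2 FFT produces an intermediate array that is numerically well defined, yet has no tidy verbal description.

```python
# Radix-2 decimation-in-time FFT that records the intermediate array after each stage.
import numpy as np

def fft_with_stages(x):
    x = np.asarray(x, dtype=complex).copy()
    n = x.size                              # assumed to be a power of two
    # Bit-reversal permutation of the input.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    stages = []
    size = 2
    while size <= n:
        half = size // 2
        w = np.exp(-2j * np.pi * np.arange(half) / size)   # twiddle factors
        for start in range(0, n, size):
            for k in range(half):
                a = x[start + k]
                b = x[start + k + half] * w[k]
                x[start + k] = a + b
                x[start + k + half] = a - b
        stages.append(x.copy())             # the "intermediate result" after this stage
        size *= 2
    return x, stages

signal = np.sin(2 * np.pi * np.arange(8) / 8)
spectrum, stages = fft_with_stages(signal)
print(stages[0])                                   # well-defined numbers, hard to put into words
print(np.allclose(spectrum, np.fft.fft(signal)))   # True: matches numpy's own FFT
```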


    • identicon
      Anonymous Coward, 9 Jul 2016 @ 11:27am

      Re:

      Fast Fourier Transforms are well understood; I do not see them as a good example of the topic, which I think is artificial intelligence and the subsequent results, which may or may not be desirable.


    • icon
      Richard (profile), 10 Jul 2016 @ 12:52am

      Re:

      yet most people couldn't explain what the intermediate result represents.

      Most people haven't studied much mathematics.

      Most people are unable to understand the technical details of the modern world -- this has been true for over 50 years.


    • icon
      Richard (profile), 10 Jul 2016 @ 2:56am

      Re:

      It does work quite well, but the essence really is that no one understands what the structure is doing.

      Not so fast...

      A lot of work is being done to understand how these things work, not least because they can go suddenly, spectacularly wrong. Currently, work is being done using the same mathematics used by general relativity to understand the multi-dimensional spaces that underlie these systems.


  • identicon
    Anonymous Coward, 9 Jul 2016 @ 10:10am

    Algorithm vs. Data Set.

    In micro-targeted content, the data sets are becoming, and will keep becoming, progressively larger. It is fair to say that the increasingly effective manufacturing of buyer intent (i.e. committing psychological rape of unsuspecting citizens) is as likely to be due to increases in data sample size as to advances in algorithmic analysis.

    Which means that the explanation could potentially be a basis for forcing disclosure of institutional surveillance by the corporate sector.

    In any case, I look forward to watching the related litigation. Who's bringing the popcorn and the rotten tomatoes?


    • identicon
      Anonymous Coward, 9 Jul 2016 @ 5:42pm

      Re: Algorithm vs. Data Set.

      "In micro targeted content, the data sets will and are becoming progressively larger."

      Computers are good at working with complete information. For example computers excel at games like chess.

      They aren't good at working with incomplete information, which is what reality is filled with. For instance, computers struggle at games like poker when facing humans. Taking the logical move each time makes you predictable; being too random could cost you. A predictable computer that never takes risks is one whose predictable nature people can exploit. Never taking risks is itself risky. One that takes risks may end up losing (it wouldn't be a risk otherwise).

      As data sets get larger and larger, the amount of incomplete information decreases. The problems are that gathering information is a slow and expensive process, and often, by the time you have gathered that information, it may be too late to act. By then the information might be less relevant, the benefits of having it might not be as great, and a competitor that acted earlier -- taking a risk by making assumptions and assuming right -- might have already overtaken you. With multiple competitors making different assumptions, there is bound to be someone who made the right assumption. Even if they assumed wrong, acting might have been a better choice than information gathering. Another problem is potentially not knowing the accuracy of that data.


      • identicon
        Anonymous Coward, 9 Jul 2016 @ 5:44pm

        Re: Re: Algorithm vs. Data Set.

        acting sooner might have been a better choice ... *


      • identicon
        Anonymous Coward, 10 Jul 2016 @ 5:15am

        Re: Re: Algorithm vs. Data Set.

        You sound like an advocate for complete surveillance of everything. Vacuum it all up just because it is there, more is better -- well, maybe not. What is the purpose of all this data analysis, and to what nefarious end could it be put -- i.e., what could possibly go wrong? These and other questions need to be answered.


        • identicon
          Anonymous Coward, 10 Jul 2016 @ 11:58am

          Re: Re: Re: Algorithm vs. Data Set.

          "Vacuum it all up just because it is there"

          That's not what I said at all.


  • identicon
    Anonymous Coward, 9 Jul 2016 @ 5:10pm

    Public vs. private

    I don't see any comment about who this would apply to - governments, big companies, small companies, or individuals.

    If someone is denied gov. disability benefits, then that is one thing.

    If a person refuses to grant permission for someone to copy their photograph, then that is something else. Why do they grant permission to some and not to others?

    For example:
    IN THE UNITED STATES DISTRICT COURT FOR THE EASTERN DISTRICT OF LOUISIANA Lisa Romain, Stacey Gibson, Joanika Davis, Schevelli Robertson, Jericho Macklin, Dameion Williams, Brian Trinchard, on Behalf of Themselves and All Others Similarly Situated,
    Plaintiffs,
    v.
    SUZY SONNIER, in her official capacity as Secretary of Louisiana Department of Children and Family Services,
    Defendant.

    ...
    Defendant’s threatened terminations of SNAP results from the DCFS’s pattern and practices...

    plaintiffs... challenge the defendant’s policies and practices of terminating individuals...

    questions of law and fact... whether defendant’s policies and practices...

    Defendant’s practices... were deficient because they
    failed to include in practice a fair system...


  • icon
    Richard (profile), 9 Jul 2016 @ 11:56pm

    AlphaGo

    AlphaGo beat Lee Sedol in Go earlier this year. It made decisions that seemed to make no sense at all, but worked out in the end.

    Actually, it didn't. If anything, it was Lee Sedol who played like that -- if you look at (9-dan) Michael Redmond's analysis of the games, you will see that in fact AlphaGo made quite reasonable moves.

    To be fair, the earlier program "MoGo," which first beat a pro (with a big handicap) some years ago, did play strange moves, but things have moved on since then.


  • identicon
    Anonymous Coward, 10 Jul 2016 @ 7:47am

    Magic 8-Ball

    How do you explain the decisions of a Magic 8-Ball?

    "Hey, it's not our fault, we wouldn't dream of being discriminatory. Take it up with the Magic 8-Ball."


    • icon
      Coyne Tibbets (profile), 10 Jul 2016 @ 1:02pm

      Re: Magic 8-Ball

      No algorithm.


      • identicon
        Anonymous Coward, 10 Jul 2016 @ 3:11pm

        Re: Re: Magic 8-Ball

        No algorithm.

        I take it you've never seen one of the many Magic 8-Ball programs.


        • icon
          Coyne Tibbets (profile), 11 Jul 2016 @ 10:56pm

          Re: Re: Re: Magic 8-Ball

          I didn't see anything about a "Magic 8-Ball program", just a "Magic 8-Ball".

          But now that you bring it up, yep, they better show that algorithm. It might be choosing answers based on race.


  • identicon
    Anonymous Coward, 10 Jul 2016 @ 11:13am

    Who would need to understand these decisions? An expert programmer? A mathematician? Or every person out there? If people like my parents need to understand the decisions a computer makes, there will never be a legal system again.
    It is hard enough to explain to some people why the "magic" computer suddenly won't print, or why restarting is an essential step in IT problem solving, because they don't need or want to know that there are 100 different services working together under the surface.
    I wonder how it will go when possibly thousands of criteria have to be explained so that everyone can understand.


    • identicon
      Anonymous Coward, 10 Jul 2016 @ 3:37pm

      Re:

      or why restarting is an essential step in IT problem solving
      The only reason restarting the computer is an essential part of problem solving is that it is the easiest way to cut the Gordian knot. We don't want to, don't have the resources, or are too lazy to actually investigate why; we just want the problem gone (until next time).


  • icon
    btr1701 (profile), 11 Jul 2016 @ 2:18pm

    AI

    > a strict "right to an explanation" seems like it may
    > actually create limits on machine learning and AI in
    > Europe -- potentially hamstringing projects by requiring
    > them to be limited to levels of human understanding

    "Skynet Averted by Robust EU Bureaucracy"


  • identicon
    Mark Allen, 12 Jul 2016 @ 1:05pm

    Right To An Explanation is Reasonable and Possible

    As an inventor of Progress Corticon, the leading rules engine for automated decision processing, I find the right to an explanation not only rational but entirely feasible. Best-of-breed rules engines support this today by providing audit trails that fully explain the results of automated decision processing.

    As the author explains, it is feasible that parts of complex decisioning logic could be represented as algorithms that are not easy to understand. That said, for regulated decisions, such algorithmic logic must be constrained by clearly understandable business rules derived from policy or legislation.

    An example is credit determination. Legislation requires non-discrimination based upon race, religion and ethnicity. Within the constraints of this legislation, an automated decision service may also apply predictive algorithms that determine propensity to default. All of this could be explained quite clearly in an audit trail of the decision result. Again, this is not only a rational request, but one that can be supported today by best-of-breed technology.
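
    To make the idea concrete, here is a bare-bones sketch in plain Python (not Corticon, and the rules and thresholds are made up) of a decision service that appends every rule it applies to an audit trail, so the result carries its own explanation.

```python
# Minimal illustration of an auditable decision service.
# Hypothetical rules and thresholds -- not Corticon and not any real credit policy.
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool = True
    audit_trail: list = field(default_factory=list)

def decide_credit(applicant: dict) -> Decision:
    decision = Decision()

    # Constraint rules derived from policy or legislation run first.
    # Protected attributes (race, religion, ethnicity) are never inputs here.
    if applicant["age"] < 18:
        decision.approved = False
        decision.audit_trail.append("Rule A1: applicant under 18 -- declined")

    # A predictive score is applied only within those constraints.
    if decision.approved:
        score = 0.4 * applicant["years_employed"] - 0.2 * applicant["debt_ratio"]
        decision.audit_trail.append(f"Rule P1: propensity score {score:.2f} (threshold 0.50)")
        if score < 0.5:
            decision.approved = False
            decision.audit_trail.append("Rule P2: score below threshold -- declined")

    return decision

result = decide_credit({"age": 30, "years_employed": 2, "debt_ratio": 2.0})
print(result.approved)                  # False
print(*result.audit_trail, sep="\n")    # the explanation the applicant would receive
```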


  • identicon
    Thomas Humphries, 19 Jan 2017 @ 2:41pm

    An explanation for every situation.

    Here's an all-purpose explanation: "computer says no..."


