Warner Bros. DMCAs Insanely Awesome Recreation Of Blade Runner By Artificial Intelligence

from the oh-the-irony dept

I'm going to dispense with any introduction here, because the meat of this story is amazing and interesting in many different ways, so we'll jump right in. Blade Runner, the film based on Philip K. Dick's classic novel Do Androids Dream of Electric Sheep?, is a film classic in every last sense of the word. If you haven't seen it, you absolutely should. Also, if you indeed haven't seen the movie, you've watched at least one fewer film than a remarkable piece of artificial intelligence software developed by Terence Broad, a London-based researcher working on his advanced degree in creative computing.

His dissertation, "Autoencoding Video Frames," sounds straightforwardly boring, until you realize that it's the key to the weird tangle of remix culture, internet copyright issues, and artificial intelligence that led Warner Bros. to file its takedown notice in the first place. Broad's goal was to apply "deep learning" (a branch of machine learning built on layered artificial neural networks) to video; he wanted to discover what kinds of creations a rudimentary form of AI might be able to generate when it was "taught" to understand real video data.

The practical application of Broad's research was to instruct an artificial neural network, an AI that is something of a simulacrum of the human brain or thought process, to watch Blade Runner several times and attempt to reconstruct its impression of what it had seen. In other words, the original film is the film as interpreted through human eyes, while Broad's AI reconstructed what the film essentially looks like through the eyes of an artificial intelligence. And if that hasn't gotten your heart rate up a bit, then you and I live on entirely different planets.

The AI first had to learn to distinguish footage from Blade Runner from other footage. Once it had done that, Broad had the AI "watch" numerical representations of frames from the film and then attempt to reconstruct them into a visual medium.

Once it had taught itself to recognize the Blade Runner data, the encoder reduced each frame of the film to a 200-digit representation of itself and reconstructed those 200 digits into a new frame intended to match the original. (Broad chose a small file size, which contributes to the blurriness of the reconstruction in the images and videos I've included in this story.) Finally, Broad had the encoder resequence the reconstructed frames to match the order of the original film.

Broad repeated the "learning" process a total of six times for both films [Blade Runner and A Scanner Darkly, a second Philip K. Dick adaptation Broad put through the same process], each time tweaking the algorithm he used to help the machine get smarter about deciding how to read the assembled data. Here's what selected frames from Blade Runner looked like to the encoder after the sixth training. Below we see two columns of before/after shots. On the left is the original frame; on the right is the encoder's interpretation of the frame.

Below is video of the original film and the reconstruction side by side.


The blur and other image issues are due in part to the compression of the footage the AI was asked to learn from, and to the way it reconstructed that footage. Regardless, the output is amazingly accurate. The irony of having this AI learn its craft via Blade Runner specifically was intentional, of course. The irony of one unintended response to the project was not.
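
For the technically curious, here is roughly what that encode-and-reconstruct loop looks like in code. To be clear, this is a minimal illustrative sketch of a frame autoencoder with a 200-number bottleneck, not Broad's actual model; the frame size, layer widths, and training details below are assumptions made for the example.

```python
import torch
import torch.nn as nn

LATENT_DIM = 200           # the "200-digit representation" of each frame
FRAME_H, FRAME_W = 96, 64  # assumed (downscaled) frame size for this sketch

class FrameAutoencoder(nn.Module):
    """Squeeze a frame down to LATENT_DIM numbers, then rebuild it from them."""
    def __init__(self):
        super().__init__()
        n_values = 3 * FRAME_H * FRAME_W              # RGB frame, flattened
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_values, 1024), nn.ReLU(),
            nn.Linear(1024, LATENT_DIM),              # the bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, n_values), nn.Sigmoid(),  # pixel values back in [0, 1]
        )

    def forward(self, frames):
        codes = self.encoder(frames)                  # frame -> 200 numbers
        recon = self.decoder(codes)                   # 200 numbers -> frame
        return recon.view(-1, 3, FRAME_H, FRAME_W), codes

model = FrameAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(frames):
    """frames: tensor of shape (batch, 3, FRAME_H, FRAME_W), values in [0, 1]."""
    recon, _ = model(frames)
    loss = loss_fn(recon, frames)   # how far is the rebuilt frame from the original?
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Stand-in for a batch of real frames; in practice you would load the film here.
    dummy_frames = torch.rand(8, 3, FRAME_H, FRAME_W)
    print(train_step(dummy_frames))
```

The bottleneck is the whole trick: the network can't simply memorize pixels, so it has to learn a compact description of what Blade Runner frames tend to look like and rebuild each frame from that description, which is why the output is recognizably the film, yet blurry.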

Last week, Warner Bros. issued a DMCA takedown notice to the video streaming website Vimeo. The notice concerned a pretty standard list of illegally uploaded files from media properties Warner owns the copyright to — including episodes of Friends and Pretty Little Liars, as well as two uploads featuring footage from the Ridley Scott movie Blade Runner.

Just a routine example of copyright infringement, right? Not exactly. Warner Bros. had just made a fascinating mistake. Some of the Blade Runner footage — which Warner has since reinstated — wasn't actually Blade Runner footage. Or, rather, it was, but not in any form the world had ever seen.

Yes, Warner Bros. DMCA'd the video of this project. To its credit, it later rescinded the DMCA request, but the project has fascinating implications for the copyright process and its collision with this kind of work. For instance, if automated crawlers looking for film footage snagged this, is that essentially punishing Broad's AI for doing its task so accurately that its interpretation closely matched the original? And, at a more basic level, is the output of the AI even a reproduction of the original film, subject to the DMCA process, or is it some kind of new "work" entirely? As the Vox post notes:

In other words: Warner had just DMCA'd an artificial reconstruction of a film about artificial intelligence being indistinguishable from humans, because it couldn't distinguish between the simulation and the real thing.

Other commenters have made the point that if the video is simply the visual interpretation of the "thoughts" of an artificial intelligence, how can it be copyrightable at all? One can't copyright thoughts, after all, only the expression of those thoughts. If these are the thoughts of an AI, are they even subject to copyright, given that the AI is not "human"? And I'm just going to totally leave alone the obvious subsequent question as to how we're going to define "human," because, hell, that's the entire point of Dick's original work.

Broad noted to Vox that the way he used Blade Runner in his AI research doesn't exactly constitute a cut-and-dried legal case: "No one has ever made a video like this before, so I guess there is no precedent for this and no legal definition of whether these reconstructed videos are an infringement of copyright."

It's an as-yet-unanswered question, but one which will need to be tackled. Video encoding and delivery, like many other currently human tasks, is ripe for the employment of the kind of AI that Broad is trying to develop. The closer software gets to becoming wetware, the more urgently questions of copyright will have to be answered, lest they get in the way of progress.

Filed Under: ai, automation, blade runner, dmca, simulation, terence broad
Companies: warner bros.


Reader Comments

  • Anonymous Coward, 3 Jun 2016 @ 12:56pm

    eh...

  • Ian, 3 Jun 2016 @ 1:07pm

    This sounds an awful lot like the CBS "remastered-works are entitled to new copyright" scenario, only the remastering was done by a third (non-human) party.

  • Anonymous Coward, 3 Jun 2016 @ 1:09pm

    In a world...

    ... where remastering sound recordings grants them a new copyright...

    One lone AI strives to redefine what it means to create...

    Come soon to courtrooms near you!

    • Anonymous Coward, 3 Jun 2016 @ 1:26pm

      Re: In a world...

      This film, like the wind done gone, is obviously the same story told from a different viewpoint.

  • Quiet Lurcker, 3 Jun 2016 @ 1:10pm

    Reminds me of a song I heard once, something about

    I fought the (fair use) law, and the law won....

  • Ryunosuke (profile), 3 Jun 2016 @ 1:24pm

    actually...

    I believe star trek touched on this during the whole "Is Data a lifeform" Trial episode.

  • TMC, 3 Jun 2016 @ 1:27pm

    AI thoughts would be copyrightable under the current scheme. Original creativity fixed in a tangible medium of expression? AI thoughts are directly expressive, unlike human thoughts, which can only be described indirectly.

    Yeah, that's a weird outcome.

  • DannyB (profile), 3 Jun 2016 @ 1:38pm

    Some thoughts

    If the AI's recreation of the film is held to be a derivative work of the original film, then your own personal memory of the film is similarly a derivative work of the film.

    If Warner Bros. has rights to the AI's memory of the film, then it should also have similar rights to your own memories of the film and every instance of each recollection you ever have of the film.

    To avoid protracted negotiations, congress should establish a fixed and standard license cost associated with every recollection of copyrighted materials.

    To promote the useful arts and sciences.

    Hey, I'll DMCA that memory of yours! I'll censor it before it even becomes audible or typed speech!

    • Anonymous Coward, 3 Jun 2016 @ 6:05pm

      Re: Some thoughts

      Your memory doesn't meet the fixation requirement. An AI's memory probably would. This AI's memory definitely would.

      • Anonymous Coward, 3 Jun 2016 @ 7:09pm

        Re: Re: Some thoughts

        Your memory doesn't meet the fixation requirement.

        Oh, really? Care to explain exactly why not?

      • Anonymous Coward, 3 Jun 2016 @ 7:10pm

        Re: Re: Some thoughts

        "Your memory doesn't meet the fixation requirement."

        Umm, why not?

        • Anonymous Coward, 4 Jun 2016 @ 9:00pm

          Re: Re: Re: Some thoughts

          Because we don't know how memory is stored? Without knowing that, we cannot say that the memory has been fixed in a particular form.

          • Coyoty (profile), 4 Jun 2016 @ 10:07pm

            Re: Re: Re: Re: Some thoughts

            We don't know how the AI's memory is stored. We can dump how it's encoded, but neural nets don't encode reproducibly. The relationships between subjects are weighted and established depending on the AI's experience, and its pathways will be different from any other artificial or natural intelligence.

        • Anonymous Coward, 6 Jun 2016 @ 8:40am

          Re: Re: Re: Some thoughts

          http://www.quizlaw.com/copyrights/what_is_fixation.php

          Fixation is required to be done in a tangible medium; a tangible medium is one that can be viewed and copied. At present, human memory cannot be copied. There's an argument that it can't be viewed either. Even if we accept that recollections of copyrighted materials are 'fixed', there is broad latitude in the law for personal use.

          Yes, this is all fucking insane.

      • PaulT (profile), 6 Jun 2016 @ 2:47am

        Re: Re: Some thoughts

        "Your memory doesn't meet the fixation requirement."

        Depends on what you mean by "fixation". Would someone with a truly photographic memory violate this requirement because they have a version they won't forget?

        If you mean nobody else can access it - what about when we invent a way to store personal memories on an external source? Just because we haven't yet invented the device that can visualise and store memories and experiences a la Videodrome/Brainstorm/Strange Days, that doesn't mean it will never exist, and so the question is worth asking.

  • Anonymous Coward, 3 Jun 2016 @ 1:58pm

    I'm sure an entertainment lawyer would argue that showing the film to an AI constitutes an unauthorized "public performance", hence violating copyright.

  • Anonymous Coward, 3 Jun 2016 @ 1:58pm

    "No one has ever made a video like this before, so I guess there is no precedent for this and no legal definition of whether these reconstructed videos are an infringement of copyright."

    It's new, therefore it's infringing according to Hollywood.

  • Anonymous Coward, 3 Jun 2016 @ 2:12pm

    There's likely an extra layer to the Irony

    Warner had just DMCA'd an artificial reconstruction of a film about artificial intelligence being indistinguishable from humans, because it couldn't distinguish between the simulation and the real thing.


    Let me rewrite that:

    Warner's DMCA reporting software just automatically recognized and submitted a DMCA request on an artificial reconstruction of a film about artificial intelligence being indistinguishable from humans, because the AI couldn't distinguish between the simulation and the real thing.

    Once humans got involved, they saw the difference and rescinded the request.

    Think about that for a moment....

  • Anonymous Anonymous Coward (profile), 3 Jun 2016 @ 2:12pm

    Famous for being famous...no more (we sincerely hope)

    I see this as one step closer to getting rid of the narcissistic exhibitionists known as actors. On one hand the world will be a better place, especially if paparazzi and media devoted to being devoted to 'stars' die a horrible, costly death.

    On the other hand, will writers and AI actually make better movies than directors reiterating the Brothers Grimm and 1950's era comic books? I think they might, after a while. The initial problem I see is that the incentive to use a random number generator rather than creative processes in the production might produce several generations of movies before they learn. But we have several generations of Grimm and comic movies and they don't appear to be going away.

    Hmmm, should we actually be blaming the audience rather than the producers? Where's my mirror?

  • Michael Ho (profile), 3 Jun 2016 @ 2:19pm

    Sounds like this is the video version of Bluebeat to me..?

    Okay, Bluebeat didn't use "machine learning" or AI, but they could have made their technology sound a bit more legit... and they'd be in about the same position, right? (Just with music, instead of video)

    https://www.techdirt.com/articles/20091105/1642426817.shtml

  • Anonymous Coward, 3 Jun 2016 @ 2:33pm

    So what am I missing here?

    From what I understand the guy took the movie, changed the viewing format to a much lower quality, with a filter to make effectively nebulas instead of rain for background movement, and then published it as if the AI reinterpreted the movie.

    The AI was created and programmed by him. He just made a program that did a re-encoding. The only auto part I see is that he didn't re-encode the frames manually.

    Now if the AI had taken the book and created those scenes from the words in the book, that would be impressive.

    • Dark Helmet (profile), 3 Jun 2016 @ 2:51pm

      Re:

      "From what I understand the guy took the movie, changed the viewing format to a much lower quality, with a filter to make effectively nebulas instead of rain for background movement, and then published it as if the AI reinterpreted the movie."

      I do not believe this is correct. Instead, Broad took the individual frames of the movie, reduced those frames to a numerical value of high compression (as opposed to having the AI view low-res frames by "sight"), and then had the AI use the numerical values to reconstruct the movie frame by frame. Think of it like someone translating the bible into a numerical code and then having someone in China re-translate it to Chinese from the numbers, and then you compare the two for accuracy.

      That the machine got things so correct is amazing.

      • Yakko Warner (profile), 3 Jun 2016 @ 5:04pm

        Re: Re:

        Isn't that essentially what recording, compressing, and sending a file over a computer does? I mean, the way you just described it sounds like what would happen if I typed the bible into a word processor, saved it, and emailed it to someone in China.

        • Anonymous Anonymous Coward (profile), 3 Jun 2016 @ 5:18pm

          Re: Re: Re:

          Well, there are a number of versions of the Bible out there, it would be hard to claim that your version wasn't the original, but it might be a copy of one under copyright. Since the 'original' was written by scribes there might be difficulty proving...well anything.

          Now, in China, they might not like the bible floating around, rumor has it that they don't like Christianity much. Might have to do with fire breathing missionaries from the 19th century.

          Now if you took the ideas of the bible, eliminated the ones that are really, really bad (i.e. support slavery and murder and capital punishment and etc.) and let AI create a new 'Good Book' that even Shinto, or Hindu, or Amer Indian, or Buddhist, etc. might sign onto, then you might have something. Maybe even something China would not object to.

          • Anonymous Coward, 4 Jun 2016 @ 9:08pm

            Re: Re: Re: Re:

            Now, in China, they might not like the bible floating around, rumor has it that they don't like Christianity much. Might have to do with fire breathing missionaries from the 19th century.
            The government of China doesn't like Christians because they cannot be controlled by the Party Line. The government recognises that Christians stubbornly refuse to toe the "Enlightened Principles of the Communist Party". Hence, they must be dealt with in any viable way, including imprisonment, death, making paupers of them, etc. The government does seem to have noticed that even these forms of control are not effective. The government applies this to any group that they cannot control.

        • Anonymous Coward, 3 Jun 2016 @ 7:15pm

          Re: Re: Re:

          Isn't that essentially what recording, compressing, and sending a file over a computer does?

          Yep.

        • quaquaquaqua, 11 Oct 2016 @ 6:20pm

          Re: Re: Re:

          People have called autoencoders "the most expensive times 1" because, when they work well it is as if you haven't done anything at all.

          The key, though, is in the neural network's architecture. Say you want to encode a color image that is 100 by 100 pixels. That means the input layer of the network would have 100 x 100 x 3 input neurons (the number of pixels times the three color channels). The output layer will also have 100 x 100 x 3 neurons. If the thing works there will be no difference between the input and output neurons -- or the difference will be minimized.

          Why is this different than copying? The difference is that somewhere in the middle, between the input and output layer the network has a bottleneck, where the number of neurons is reduced to only a few. After training the network you can view those few neurons as a small handful of variables that can optimally represent the image. Instead of needing 100 x 100 x 3 variables to represent the image you could use something as small as 12 variables (for example).
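
          To make that concrete, here's a toy version of that shape in PyTorch. The layer sizes are made up purely for illustration (this is nobody's actual model), just to show the 30,000-in, 12-wide-bottleneck, 30,000-out structure:

          ```python
          import torch
          import torch.nn as nn

          n_inputs = 100 * 100 * 3   # 30,000 numbers per image
          bottleneck = 12            # the "small handful of variables"

          autoencoder = nn.Sequential(
              nn.Flatten(),
              nn.Linear(n_inputs, 256), nn.ReLU(),
              nn.Linear(256, bottleneck),             # everything must squeeze through here
              nn.Linear(bottleneck, 256), nn.ReLU(),
              nn.Linear(256, n_inputs), nn.Sigmoid()  # rebuild all 30,000 values
          )

          x = torch.rand(1, 3, 100, 100)              # one random stand-in "image"
          print(autoencoder(x).shape)                 # torch.Size([1, 30000])
          ```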

          One application is of course compression. But you can use the same principle for other purposes such as:
          denoising
          upscaling: https://github.com/nagadomi/waifu2x
          generative models (by slightly modifying the variables).

          non technical explanation at 43 minutes:
          http://ml4a.github.io/classes/itp-S16/06/#

      • Anonymous Coward, 5 Jun 2016 @ 10:07am

        Re: Re:

        I think his point is that the conversion to numerical values is essentially a different method of encoding, and the AI output would simply qualify as a format conversion algorithm re-encoding it back to a common format, lossy as it may be. If the AI had decided of its own accord to "watch" the film and output it, then we would be onto something.

  • Stan (profile), 3 Jun 2016 @ 5:21pm

    Can you copyright an AI's thoughts?

    ' If these are the thoughts of an AI, are they subject to copyright by virtue of the AI not being "human?" '

    Does it make a difference if the AI was coded by a human or our favorite selfie-taking monkey?

    • Anonymous Anonymous Coward (profile), 3 Jun 2016 @ 5:32pm

      Re: Can you copyright an AI's thoughts?

      Interesting question. In this 'everything has to be owned' society, one might say the programmer owns it. But while the programmer might have set up the AI, there were other inputs that had to do with the creation of, well, whatever. That means that whoever created the inputs that the AI learned from own a piece...or do they? So far, copyright hasn't recognized how, say Beethoven influenced the Beatles or the Brothers Grimm influenced Disney, so they might have an argument, under current circumstances.

      I thank the makers that I am too old to expect to live through the very, very messy and ill-reasoned series of court cases this will create.

      On the other hand, the monkey didn't get the copyright, not a natural person.

  • Anonymous Coward, 3 Jun 2016 @ 7:18pm

    Amateurs!

    My super-duper AI is so good that you can't tell the output from the original. It might look like a bit for bit copy, but that's just because my AI is soooo good.

  • tp, 3 Jun 2016 @ 8:20pm

    It's obviously Derived Work.

    Duh, if he used blade runner movie as his input data, obviously it's a derived work. And you'll need a license from the authors of the movie, if you publish your derived work. No amount of "we changed the contents of the pixels to hide that it's coming from copyrighted work" is going to help with that.

    AI does not solve the fundamental problem that creating large and complex digital works takes a significant amount of time and money. That effort is protected by copyright laws, which means these kinds of remixes are bordering on the limits of what is allowed.

    • Anonymous Coward, 6 Jun 2016 @ 2:07am

      Re: It's obviously Derived Work.

      You talk about unauthorized copying... What about the copy of the movie in the viewer's brain?

      First of all, you do realize this is a process that mimics what happens to the movie inside a fleshy brain, right?

      The one huge difference is a human is looking at a screen (data encoded as light) while the AI is looking at the raw picture data.

      This kind of stuff is why we need to clarify exactly what "copying" means in order to avoid BS like this ending in court.

  • Rekrul, 3 Jun 2016 @ 10:31pm

    The actual description of what the AI did is interesting, but the video is pretty underwhelming. I mean, even if it was created by an AI that learned to recognize video, it still looks like he just ran the video through a processing program with a low-res filter.

  • Anonymous Coward, 3 Jun 2016 @ 10:34pm

    "To its credit, it later rescinded the DMCA request..."

    NO! Stop that! No one any longer gets any credit for undoing avoidable copyright asshattery.

  • Anonymous Coward, 5 Jun 2016 @ 6:17am

    Easy answer to the copyright question

    If an AI can hold copyright then so can a monkey taking selfies. Neither are human and only humans can hold copyright.

    • Anonymous Coward, 5 Jun 2016 @ 9:46am

      Re: Easy answer to the copyright question

      But can an artificial monkey?

  • Esoth, 5 Jun 2016 @ 11:52pm

    The thing that struck me is how washed out the blurry reprint is. To my eyes the convenient explanation for the blurred images doesn't hide the way it underscores how dumb and tone deaf the replication is. The original remains alive with meaning and nuance while the copy is just a replication, entirely dependent on the visual and aural palette of the original for its impact. Would the AI know the difference if it were recreating a dark-themed deodorant commercial?

    • Anonymous Coward, 6 Jun 2016 @ 2:12am

      Re:

      The AI doesn't "know" anything; this is just a mimic of a potential process of encoding/decoding that would hypothetically happen in a brain. You'd need a lot more than this to have an AI that actually "understands" and can explain what the movie was about.

      Technically all of this is pointless anyway. It's not like robots have a biochemical need to be entertained. Let's just go back to living in caves! Ugg want scratch balls... :D :P

  • Wowza, 6 Jun 2016 @ 7:32pm

    Hella compression

    "the encoder reduced each frame of the film to a 200-digit representation of itself"

    200 digits, so assuming 8 bytes per digit = 1,600 bytes for a single frame... I don't know what quality the video was, but assuming it's DVD quality 720x480 (NTSC) = 345,600 bytes at 1 byte per pixel.

    Despite the blurriness, that's one hella compression algorithm he's got there, an approx ratio of 216:1!!
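
    A quick sanity check on that math, under the same assumptions (8 bytes per "digit", and either 1 byte per pixel or raw 24-bit RGB for the frame):

    ```python
    latent_bytes = 200 * 8            # 1,600 bytes per encoded frame
    frame_1_byte_px = 720 * 480       # 345,600 bytes at 1 byte per pixel
    frame_rgb = 720 * 480 * 3         # 1,036,800 bytes for raw 24-bit RGB

    print(frame_1_byte_px / latent_bytes)   # 216.0
    print(frame_rgb / latent_bytes)         # 648.0
    ```

    So against a raw RGB frame the ratio would be closer to 648:1, though of course the film would never be stored uncompressed in the first place.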

  • Dolores Catherino, 2 Dec 2016 @ 9:57am

    AI Music and Copyright

    With respect to music, a second issue, beyond the status of AI as an author/creator is in play here.

    In understanding music as an abstract language, it becomes apparent that there are very limited melodic and harmonic combinations possible within a 7 note (modal), 5 note (pentatonic), or 12 note (chromatic) system. An analogy could be the limited number of words and sentences possible with a 5, 7, or 12 letter alphabet.

    Within this limited musical system, every possible melodic and harmonic fragment/combination has been conceived before, used in varying contexts - historically and stylistically. Ever expanding databases of public domain musical fragments may eventually be catalogued, readily accessible and used in legal defense against claims of copyright infringement. Uniqueness of integrated musical context may ultimately be the deciding factor in determining ownership/infringement vs. public domain variation.

    With the existence of immense databases of worldwide historical (public domain) musical fragments, it should not be difficult to find prior examples of any basic melodic/harmonic fragments being used in the generative variation processes of AI, preexisting in the public domain.

    “Copyright law excludes protection for works that are produced by purely mechanized or random processes, so the question comes down to the extent of human involvement, and whether a person can be credited with any of the creative processes that produced the work.” I wonder how the court would define ‘creative involvement’. Would this idea be extended to creative editing and arrangement of randomly generated AI fragments?

    And in practice, will a distinction emerge between human creative involvement as musical compositor (constructing a final sonic 'image' by combining layers of previously-created/computer generated material) vs composer - as poles in a 'creative continuum' between arranging/editing and innovative exploration/expression?
