Warner Bros. DMCAs Insanely Awesome Recreation Of Blade Runner By Artificial Intelligence
from the oh-the-irony dept
I'm going to dispense with any introduction here, because the meat of this story is amazing and interesting in many different ways, so we'll jump right in. Blade Runner, the film based on Philip K. Dick's classic novel, Do Androids Dream of Electric Sheep?, is a classic in every last sense of the word. If you haven't seen it, you absolutely should. Also, if you indeed haven't seen the movie, you've watched at least one fewer film than an amazing piece of artificial intelligence software developed by Terence Broad, a London-based researcher working on his advanced degree in creative computing.
His dissertation, "Autoencoding Video Frames," sounds straightforwardly boring, until you realize that it's the key to the weird tangle of remix culture, internet copyright issues, and artificial intelligence that led Warner Bros. to file its takedown notice in the first place. Broad's goal was to apply "deep learning" — a fundamental piece of artificial intelligence that uses algorithmic machine learning — to video; he wanted to discover what kinds of creations a rudimentary form of AI might be able to generate when it was "taught" to understand real video data.
The practical application of Broad's research was to instruct an artificial neural network, an AI that is something of a simulacrum of the human brain or thought process, to watch Blade Runner several times and attempt to reconstruct its impression of what it had seen. In other words, the original film is what the movie looks like through human eyes, while Broad's AI reconstructed what the film looks like through the eyes of an artificial intelligence. And if that hasn't gotten your heart rate up a bit, then you and I live on entirely different planets.
The AI first had to learn to distinguish footage from Blade Runner from other footage. Once it had done that, Broad had the AI "watch" numerical representations of frames from the film and then attempt to reconstruct them into a visual medium.
Once it had taught itself to recognize the Blade Runner data, the encoder reduced each frame of the film to a 200-digit representation of itself and reconstructed those 200 digits into a new frame intended to match the original. (Broad chose a small file size, which contributes to the blurriness of the reconstruction in the images and videos I've included in this story.) Finally, Broad had the encoder resequence the reconstructed frames to match the order of the original film.
Broad repeated the "learning" process a total of six times for both films, each time tweaking the algorithm he used to help the machine get smarter about deciding how to read the assembled data. Here's what selected frames from Blade Runner looked like to the encoder after the sixth training. Below we see two columns of before/after shots. On the left is the original frame; on the right is the encoder's interpretation of the frame.
Below is video of the original film and the reconstruction side by side.
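To make that encode/reconstruct/resequence loop a bit more concrete, here is a minimal sketch in Python. It is not Broad's code: the encoder and decoder below are trivial stand-ins for his trained neural network, and the frame size and frame count are invented for illustration.

```python
# Sketch of the pipeline described above: squeeze each frame down to a
# 200-number code, rebuild a frame from that code, and keep the rebuilt
# frames in the film's original order. The "encoder" and "decoder" here
# are crude stand-ins for the trained network Broad actually used.
import numpy as np

LATENT_SIZE = 200          # the "200-digit representation" of a frame
FRAME_SHAPE = (96, 64, 3)  # assumed (height, width, RGB) for illustration

def encode_frame(frame: np.ndarray) -> np.ndarray:
    """Stand-in encoder: reduce a frame to LATENT_SIZE numbers."""
    flat = frame.astype(np.float32).ravel() / 255.0
    usable = LATENT_SIZE * (flat.size // LATENT_SIZE)
    return flat[:usable].reshape(LATENT_SIZE, -1).mean(axis=1)

def decode_latent(latent: np.ndarray) -> np.ndarray:
    """Stand-in decoder: expand the LATENT_SIZE numbers back into a frame."""
    pixels = np.resize(latent, FRAME_SHAPE)  # repeat values to fill the frame
    return (pixels * 255.0).clip(0, 255).astype(np.uint8)

# A stand-in "film": 24 random frames in their original order.
film = [np.random.randint(0, 256, FRAME_SHAPE, dtype=np.uint8) for _ in range(24)]

# Encode every frame, rebuild each one from its code, and resequence the
# reconstructions to match the order of the original film.
codes = [encode_frame(frame) for frame in film]
reconstruction = [decode_latent(code) for code in codes]

assert len(reconstruction) == len(film)  # one rebuilt frame per original
```

In Broad's actual project, those stand-in functions are the two halves of a trained autoencoder, which is what makes the reconstructions recognizable rather than noise.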
The blur and image issues are due in part to the heavy compression of the data the AI was asked to learn from and reconstruct. Regardless, the output is amazingly accurate. The irony of having this AI learn to do this via Blade Runner specifically was intentional, of course. The irony of one unintended response to the project was not.
Last week, Warner Bros. issued a DMCA takedown notice to the video streaming website Vimeo. The notice concerned a pretty standard list of illegally uploaded files from media properties Warner owns the copyright to — including episodes of Friends and Pretty Little Liars, as well as two uploads featuring footage from the Ridley Scott movie Blade Runner.
Just a routine example of copyright infringement, right? Not exactly. Warner Bros. had just made a fascinating mistake. Some of the Blade Runner footage — which Warner has since reinstated — wasn't actually Blade Runner footage. Or, rather, it was, but not in any form the world had ever seen.
Yes, Warner Bros. DMCA'd the video of this project. To its credit, it later rescinded the takedown request, but the project has fascinating implications for the copyright process and its collision with this kind of work. For instance, if automatic crawlers looking for film footage flagged this, is that essentially punishing Broad's AI for doing its task so well that its interpretation of the film closely matched the original? And, at a more basic level, is the output of the AI even a reproduction of the original film, subjecting it to the DMCA process, or is it some kind of new "work" entirely? As the Vox post notes:
In other words: Warner had just DMCA'd an artificial reconstruction of a film about artificial intelligence being indistinguishable from humans, because it couldn't distinguish between the simulation and the real thing.
Others have made the point that if the video is simply the visual interpretation of the "thoughts" of an artificial intelligence, how is that copyrightable? One can't copyright thoughts, after all, only the expression of those thoughts. If these are the thoughts of an AI, are they subject to copyright at all, given that the AI isn't "human"? And I'm just going to totally leave alone the obvious subsequent question as to how we're going to define human, because, hell, that's the entire point of Dick's original work.
Broad noted to Vox that the way he used Blade Runner in his AI research doesn't exactly constitute a cut-and-dried legal case: "No one has ever made a video like this before, so I guess there is no precedent for this and no legal definition of whether these reconstructed videos are an infringement of copyright."
It's an as-yet-unanswered question, but one that will need to be tackled. Video encoding and delivery, like many other currently human tasks, is ripe for the employment of AI of the kind Broad is trying to develop. The closer software gets to becoming wetware, the more pressing questions of copyright will become, lest they get in the way of progress.
Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.
Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.
While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.
–The Techdirt Team
Filed Under: ai, automation, blade runner, dmca, simulation, terence broad
Companies: warner bros.
Reader Comments
In a world...
One lone AI strives to redefine what it means to create...
Coming soon to courtrooms near you!
Re: In a world...
I fought the (fair use) law, and the law won....
actually...
Yeah, that's a weird outcome.
Some thoughts
If Warner Bros. has rights to the AI's memory of the film, then it should also have similar rights to your own memories of the film and every instance of each recollection you ever have of the film.
To avoid protracted negotiations, Congress should establish a fixed and standard license cost associated with every recollection of copyrighted materials.
To promote the useful arts and sciences.
Hey, I'll DMCA that memory of yours! I'll censor it before it even becomes audible speech or typed words!
Re: Some thoughts
Re: Re: Some thoughts
Oh, really? Care to explain exactly why not?
Re: Re: Some thoughts
Umm, why not?
Re: Re: Re: Some thoughts
Re: Re: Re: Re: Some thoughts
Re: Re: Re: Some thoughts
Fixation must occur in a tangible medium, and a tangible medium is one that can be viewed and copied. At present, human memory cannot be copied. There's an argument that it can't be viewed either. Even if we accept that recollections of copyrighted materials are 'fixed,' there is broad latitude in the law for personal use.
Yes, this is all fucking insane.
Re: Re: Some thoughts
Depends on what you mean by "fixation". Would someone with a truly photographic memory violate this requirement because they have a version they won't forget?
If you mean nobody else can access it - what about when we invent a way to store personal memories on an external source? Just because we haven't yet invented the device that can visualise and store memories and experiences a la Videodrome/Brainstorm/Strange Days, that doesn't mean it will never exist, so the question is worth asking.
Re:
It's new, therefore it's infringing according to Hollywood.
There's likely an extra layer to the irony
Let me rewrite that:
Warner's DMCA reporting software just automatically recognized and submitted a DMCA request on an artificial reconstruction of a film about artificial intelligence being indistinguishable from humans, because the AI couldn't distinguish between the simulation and the real thing.
Once humans got involved, they saw the difference and rescinded the request.
Think about that for a moment....
Famous for being famous...no more (we sincerely hope)
On the other hand, will writers and AI actually make better movies than directors rehashing the Brothers Grimm and 1950s-era comic books? I think they might, after a while. The initial problem I see is that the incentive to use a random number generator rather than a creative process in production might produce several generations of movies before they learn. But we have several generations of Grimm and comic-book movies and they don't appear to be going away.
Hmmm, should we actually be blaming the audience rather than the producers? Where's my mirror?
Sounds like this is the video version of Bluebeat to me..?
https://www.techdirt.com/articles/20091105/1642426817.shtml
From what I understand, the guy took the movie, converted it to a much lower quality format with a filter that effectively turns the rain into nebulas for background movement, and then published it as if the AI had reinterpreted the movie.
The AI was created and programmed by him. He just made a program that did a re-encoding. The only auto part I see is that he didn't re-encode the frames manually.
Now if the AI had taken the book and created those scenes from the words in the book, that would be impressive.
Re:
I do not believe this is correct. Instead, Broad took the individual frames of the movie, reduced each frame to a highly compressed numerical representation (as opposed to having the AI view low-res frames by "sight"), and then had the AI use those numerical values to reconstruct the movie frame by frame. Think of it like translating the Bible into a numerical code, having someone in China re-translate it into Chinese from the numbers alone, and then comparing the two for accuracy.
That the machine got things so correct is amazing.
Re: Re:
Re: Re: Re:
Now, in China, they might not like the bible floating around, rumor has it that they don't like Christianity much. Might have to do with fire breathing missionaries from the 19th century.
Now if you took the ideas of the bible, eliminated the ones that are really, really bad (i.e. support slavery and murder and capital punishment and etc.) and let AI create a new 'Good Book' that even Shinto, or Hindu, or Amer Indian, or Buddhist, etc. might sign onto, then you might have something. Maybe even something China would not object to.
Re: Re: Re: Re:
Re: Re: Re:
Yep.
Re: Re: Re:
The key, though, is in the neural network's architecture. Say you want to encode a color image that is 100 by 100 pixels. That means the input layer of the network would have 100 x 100 x 3 input neurons (the number of pixels times the three color channels). The output layer will also have 100 x 100 x 3 neurons. If the thing works, there will be no difference between the input and output neurons -- or the difference will be minimized.
Why is this different from copying? The difference is that somewhere in the middle, between the input and output layers, the network has a bottleneck, where the number of neurons is reduced to only a few. After training the network, you can view those few neurons as a small handful of variables that can optimally represent the image. Instead of needing 100 x 100 x 3 variables to represent the image, you could use something as small as 12 variables (for example).
One application is of course compression. But you can use the same principle for other purposes such as:
denoising
upscaling: https://github.com/nagadomi/waifu2x
generative models (by slightly modifying the variables).
non-technical explanation at 43 minutes:
http://ml4a.github.io/classes/itp-S16/06/#
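To make that bottleneck concrete, here is a minimal, hypothetical autoencoder sketch in PyTorch using the same numbers as above (a 100 x 100 RGB input layer and a 12-variable bottleneck). It illustrates the principle only; it is not Broad's implementation.

```python
# Minimal autoencoder: 100 * 100 * 3 = 30,000 input values squeezed through
# a 12-variable bottleneck and expanded back to 30,000 output values.
import torch
import torch.nn as nn

INPUT_SIZE = 100 * 100 * 3   # pixels times the three color channels
BOTTLENECK = 12              # "something as small as 12 variables"

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder: input layer down to the bottleneck
        self.encoder = nn.Sequential(
            nn.Linear(INPUT_SIZE, 512), nn.ReLU(),
            nn.Linear(512, BOTTLENECK),
        )
        # decoder: bottleneck back up to a full-size output layer
        self.decoder = nn.Sequential(
            nn.Linear(BOTTLENECK, 512), nn.ReLU(),
            nn.Linear(512, INPUT_SIZE), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training minimizes the difference between each input and its reconstruction;
# random tensors stand in for real image data here.
images = torch.rand(8, INPUT_SIZE)
for _ in range(5):
    loss = nn.functional.mse_loss(model(images), images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained, everything the network "remembers" about an image has to pass through those 12 numbers, which is why the output is a lossy reconstruction rather than a copy.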
Re: Re:
Can you copyright an AI's thoughts?
Does it make a difference if the AI was coded by a human or our favorite selfie-taking monkey?
Re: Can you copyright an AI's thoughts?
I thank the makers that I am too old to expect to live through the very, very, very messy and ill-reasoned series of court cases this will create.
On the other hand, the monkey didn't get the copyright, not being a natural person.
Amateurs!
It's obviously Derived Work.
AI does not solve the fundamental problem that creating large and complex digital works takes a significant amount of time and money. That effort is protected by copyright laws, which means this kind of remix is bordering on the limits of what is allowed.
Re: It's obviously Derived Work.
First of all, you do realize this is a process that mimics what happens to the movie inside a fleshy brain, right?
The one huge difference is that a human is looking at a screen (data encoded as light) while the AI is looking at the raw picture data.
This kind of stuff is why we need to clarify exactly what "copying" means in order to avoid BS like this ending in court.
"To its credit, it later rescinded the DMCA request..."
Easy answer to the copyright question
Re: Easy answer to the copyright question
Re:
Technically all of this is pointless anyway. It's not like robots have a biochemical need to be entertained. Let's just go back to living in caves! Ugg want scratch balls... :D :P
Hella compression
200 digits, so assuming 8 bytes per digit = 1,600 bytes for a single frame... I don't know what quality the video was, but assuming it's DVD quality, 720 x 480 (NTSC) = 345,600 bytes.
Despite the blurriness, that's one hella compression algorithm he's got there, an approx ratio of 1:216!!
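Checking that arithmetic with a quick script, under the same assumptions the comment makes (200 values at 8 bytes each, and one byte per pixel for a 720x480 frame; an RGB frame would be three times larger still):

```python
# Back-of-the-envelope compression ratio, using the comment's assumptions.
LATENT_VALUES = 200
BYTES_PER_VALUE = 8
latent_bytes = LATENT_VALUES * BYTES_PER_VALUE   # 1,600 bytes per frame

FRAME_WIDTH, FRAME_HEIGHT = 720, 480             # DVD / NTSC resolution
frame_bytes = FRAME_WIDTH * FRAME_HEIGHT * 1     # 345,600 bytes at 1 byte/pixel

print(f"latent: {latent_bytes:,} bytes, frame: {frame_bytes:,} bytes")
print(f"compression ratio: {frame_bytes // latent_bytes}:1")  # 216:1
```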
AI Music and Copyright
In understanding music as an abstract language, it becomes apparent that there are very limited melodic and harmonic combinations possible within a 7-note (modal), 5-note (pentatonic), or 12-note (chromatic) system. An analogy could be the limited number of words and sentences possible with a 5-, 7-, or 12-letter alphabet.
Within this limited musical system, every possible melodic and harmonic fragment/combination has been conceived before, used in varying contexts - historically and stylistically. Ever-expanding databases of public domain musical fragments may eventually be catalogued, readily accessible and used in legal defense against claims of copyright infringement. Uniqueness of integrated musical context may ultimately be the deciding factor in determining ownership/infringement vs. public domain variation.
With the existence of immense databases of worldwide historical (public domain) musical fragments, it should not be difficult to find prior examples of any basic melodic/harmonic fragments being used in the generative variation processes of AI, preexisting in the public domain.
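For a rough sense of scale for those fragment counts, here is a small sketch that simply counts ordered pitch sequences of a given length in each system, under the simplifying assumption that rhythm, octave, and duration are ignored:

```python
# Count ordered pitch sequences of a given length drawn from each note system.
# This ignores rhythm, octave, and duration, so it's a crude count of raw
# pitch sequences rather than a full musical analysis.
for system, notes in [("pentatonic", 5), ("modal", 7), ("chromatic", 12)]:
    for length in (4, 8):
        fragments = notes ** length
        print(f"{system}: {fragments:,} possible {length}-note fragments")
```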
“Copyright law excludes protection for works that are produced by purely mechanized or random processes, so the question comes down to the extent of human involvement, and whether a person can be credited with any of the creative processes that produced the work.” I wonder how the court would define ‘creative involvement’. Would this idea be extended to creative editing and arrangement of randomly generated AI fragments?
And in practice, will a distinction emerge between human creative involvement as musical compositor (constructing a final sonic 'image' by combining layers of previously-created/computer generated material) vs composer - as poles in a 'creative continuum' between arranging/editing and innovative exploration/expression?