from the we're-all-infringers-otherwise dept
Julian Sanchez has put up a fascinating post discussing how copyright is really a misnomer in the digital age. He's building off the various discussions about the Google book scanning project and whether or not it's fair use. The key point he makes is that thinking about a "copy" in this situation can be misleading, because you can achieve the same results without a "copy," though perhaps more awkwardly:
Suppose I tweet that I'm trying to remember which Borges story has that line about how "mirrors and copulation are abominable, because they increase the number of men." Some of my diligent friends hurry to their libraries, flip through their Borges collections, and tweet back the answer--along with a few sentences of the surrounding context. Clearly there's nothing intrinsically objectionable about the search function, and a quotation of a sufficiently limited portion of the whole work in reply would normally be protected by fair use. The problem is just that Google's search--and indeed, any computer search--technically requires that a copy be made. But to my mind, this just underscores how increasingly maladaptive it is to make "copying" the primary locus of regulation in our system of intellectual property.
Technology even complicates the question of just what constitutes a "copy"--an intriguing issue I explored in a few articles back in my days at Ars Technica. Imagine, for instance, that Google took a different approach to indexing in hopes of avoiding thorny copyright questions. Instead of storing "copies" of each book, suppose they created a huge database called Google Concordance, consisting of an enormous catalog of every word or short phrase someone might want to look up, followed by a long list, like a kind of super-index, specifying the location on every page of every book in which that word or phrase appears. ("Aardvark: Blackwell Guide to the Philosophy of Computing and Information, Page 221, Line 17, word 3...") Obviously, the Google Concordance would be a very valuable and useful reference text, and nowhere in the database would you find anything resembling a "copy" of any of the cataloged works. But just as obviously, it would contain all the information a clever programmer would need to reconstruct an arbitrary portion of the original text on the fly, assuming the database could be queried fast enough. You can imagine someone creating certain kinds of "derivative works" in a similar way: If you don't want the RIAA taking down your mashup, you might try to offer it as an algorithm specifying time segments of component tracks to be combined in a particular manner... an algorithm that might produce gibberish or Girl Talk depending on what files you feed it.
In a sense, it's always the processing algorithm that determines whether a particular binary string is a "copy" of a work or not. Open an MP3 of a Lady Gaga track in a text editor and you'll get a wholly original work of experimental literature--though not one anybody (except possibly Lady Gaga) is likely to be interested in reading. For that matter, Google's database is just an enormous collection of ones and zeroes until some program processes it to generate human-readable output. I distinguished my hypothetical Google Concordance database from a collection of copied books, but if you point to a particular file and ask whether it contains the Concordance or copies of the books, there's a very literal sense in which there just is no fact of the matter until you know what algorithm will be used to render it as alphanumeric text. This may sound like airy metaphysical hairsplitting, but the power of computers to rapidly aggregate and process dispersed information on a global network is likely to create genuine practical complications for a legal framework that takes discrete, physically contiguous chunks called "copies" as its fundamental unit of analysis. Legally speaking, it would seem to make an enormous difference whether books are scanned and stored as books, or as a comprehensive concordance database maintained by Google, or as a series of hundreds or thousands of complementary partial concordances dispersed across many servers (or even individual hard-drives linked by a p2p network). Given sufficient bandwidth and processing speed, it might make no difference at all in practice. Maybe we should take that as a hint to reexamine our categories.
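To make the concordance thought experiment a little more concrete, here's a minimal sketch in Python. All of the names and data are invented for illustration, and it uses simple word offsets instead of the page/line/word coordinates in the quote, but it shows how an index that stores no contiguous passage of any book still contains everything needed to regenerate an arbitrary slice of one:

```python
from collections import defaultdict

def build_concordance(books):
    """Build a word -> [(title, word_position)] index.

    `books` maps a title to its full text. Positions are plain word
    offsets here, a simplification of the "Page 221, Line 17, word 3"
    coordinates in Sanchez's example.
    """
    concordance = defaultdict(list)
    for title, text in books.items():
        for position, word in enumerate(text.split()):
            concordance[word].append((title, position))
    return concordance

def reconstruct(concordance, title, start, end):
    """Rebuild words start..end of `title` from the index alone.

    Nowhere does the concordance hold a contiguous passage, yet any
    slice of the original can be reassembled on the fly.
    """
    slots = {}
    for word, locations in concordance.items():
        for book, position in locations:
            if book == title and start <= position <= end:
                slots[position] = word
    return " ".join(slots[i] for i in sorted(slots))

books = {"Tlon": "mirrors and copulation are abominable because "
                 "they increase the number of men"}
index = build_concordance(books)
print(reconstruct(index, "Tlon", 0, 4))
# -> "mirrors and copulation are abominable"
```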
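The same goes for the point that the processing algorithm, not the binary string itself, decides what the data "is." A toy demonstration (the bytes below are made up for illustration): one decoder renders the buffer as text, another treats the identical buffer as raw audio samples:

```python
# One byte string, two "rendering" algorithms. The payload is invented;
# the point is only that "copy-ness" lives in the decoder, not the bits.
payload = bytes([72, 101, 108, 108, 111, 33])

# Algorithm 1: interpret the buffer as ASCII text.
as_text = payload.decode("ascii")        # -> "Hello!"

# Algorithm 2: interpret the same buffer as unsigned 8-bit PCM samples,
# re-centered around zero the way a raw audio player would.
as_samples = [b - 128 for b in payload]  # -> [-56, -27, -20, -20, -17, -95]

print(as_text)
print(as_samples)
```

Until you pick a decoder, there is, quite literally, no fact of the matter about which of the two the file "contains."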
Those three paragraphs do an amazingly good job of showing why copyright is often the wrong tool for the job it's trying to do. It's focused on the wrong thing. And while the examples above take things to an extreme (though one not outside the realm of near-future possibility), it's really the same problem we face all the time today. For example, we've discussed the whole question of what "copy" is made when someone links to a site. If you read what copyright law actually says, there's nothing infringing about linking. But, at the same time, every time you visit a website, you're technically making a copy -- which could be considered infringement. To a large extent, we make up the rules as we go along as technology changes, because applying the letter of the law just doesn't make sense. Julian's description above simply takes that basic concept a bit further.
He goes on to point this out, while also discussing how copyright law was designed back in an age when making copies was expensive, and likely limited to those with some sort of commercial intent, as opposed to what we have today, where you make copies just to do almost anything on a computer. The focus on the "copy" really doesn't make sense, and Sanchez argues that perhaps it should be removed from the debate entirely, since it's simply not applicable anymore:
Instead of ginning up exceptions to a general prohibition on copying just to permit publicly valuable use of content, maybe we should just admit that "copying" no longer makes sense as a primary locus of intellectual property regulation. Fair use analysis typically employs a four-factor test, but the upshot is usually to see how a particular type of copying would affect the market for the original work--which makes sense, given that the purpose of copyright is to give creators a financial incentive to produce and distribute new works. If that's fundamentally what we care about, though, a default property-like right of control over copying, which now has to be riddled with exceptions to allow almost any ordinary use of content, looks like an increasingly circuitous Rube Goldberg mechanism for achieving that goal. I'm not sure what the alternative would be--or even whether rejiggering the basic categories would alter the underlying analysis much. But--just off the top of my head--you could imagine a system where the core offense was not "copyright infringement" but some kind of tort of unfair competition with an original work. In many cases it would yield the same practical result, but at least we'd reorient the public discourse around "copyright" to focus on measurable harms to creators' earnings--and ideally get away from the confused notion that copying without permission is somehow equivalent to "stealing" by default unless it fits some pre-established exception.
Filed Under: copies, copyright, digital