I've never understood the hype about QR codes. They appeared one day, and then suddenly every advertiser made them a priority, plastering them all over everything in print. It has always seemed like an undue obsession with something that, ultimately, is not that useful to very many people -- and that's assuming most people even know what they are. I was pleased to discover that I'm not the only one: the Guardian has set up a Tumblr called WTF QR CODES to catalog the many bizarre and inappropriate uses of the technology:
Most people look at a QR code and see "robot barf", but marketers seem to think they are a must-have technology for their advertising campaigns. In their minds, eager consumers wander around with their smartphones, scanning square codes wherever they appear. As a result, the codes appear just about everywhere, and often in some really absurd places.
The examples range from the fairly mundane (QR codes in the subway, where there is no data reception and where they are often located on the inaccessible side of the tracks) to the completely outlandish and even dangerous (huge QR codes towed behind airplanes, or printed on highway-side billboards).
There's one thing the article doesn't mention that I think is an important point: even if QR codes were popular, they would be a doomed transitional technology no matter how you slice it. Image recognition technology has been progressing rapidly and is already being used in products like Google Goggles, which means visual machine languages are going to be unnecessary. The tech isn't perfect yet, but it's already at the point that smartphones are capable of recognizing ads based on color, configuration and other indicators. As visual search becomes more common, consumers are going to get used to the idea that they can snap a photo of anything and find related information online—and the QR code will be officially obsolete (at least as a marketing tool).
Until then, I guess advertisers will keep slapping them on everything from bananas to condoms.
It's well-known that movable-type printing started (at least in the Western world) with the Gutenberg Bible, which all-but-singlehandedly ushered in a new era of literature distribution. To this day, the Bible remains one of the most-printed books of all time, and it's interesting to learn that it still plays a role in pushing publishing technology forward. The Christian missionary initiative Every Tribe Every Nation (ETEN) is working to make ebook Bibles available in as many languages as possible, on as many platforms as possible—and in doing so, they're solving technical problems that few others are addressing:
Now, it turns out, the old missionary impulse is being turned towards some extremely difficult technical challenges: as Mark Howe [who works on the project] has said, "For all the issues that are still to be solved, ETEN is trying to do things that the world's biggest tech companies haven't cracked yet, such as rendering minority languages correctly on mobile devices. There's a unity among Bible translators and publishers that stands in stark contrast to the fractured, fratricidal smartphone industry." And of course, once these technical challenges are met, it won't be Bibles only that people can get on their mobile devices: whole textual worlds will open up for them.
Much of the innovation has to do with niche languages (they have translations in Potawatomi and Hawai'i Pidgin) and the developing world: ETEN is tackling translation challenges that are a low priority for many businesses, since those businesses aren't interested in entering those markets—at least not enough, or not yet. But if ETEN succeeds in making this kind of mass internationalization easier, it is sure to have a ripple effect as others make use of the technology. The Bible may once again be responsible for driving a communications revolution.
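Rendering minority languages correctly is partly a font and layout problem, but it starts with text-processing basics like Unicode normalization, where the same word can be encoded in multiple byte-level forms. A minimal illustration in Python's standard library (the Hawaiian example word is mine, not from ETEN's materials):

```python
import unicodedata

# "Kū" can be stored as a single precomposed character (NFC)
# or as a base letter plus a combining macron (NFD).
composed = "K\u016b"                                  # 'Kū', 2 code points
decomposed = unicodedata.normalize("NFD", composed)   # 'K' + 'u' + U+0304

print(len(composed))    # 2
print(len(decomposed))  # 3

# Visually identical strings compare unequal until normalized --
# exactly the kind of detail that breaks search, sorting and
# rendering for languages with heavy diacritic use.
assert composed != decomposed
assert unicodedata.normalize("NFC", decomposed) == composed
```

A renderer or search index that skips this step will treat the two forms as different words, which is one reason "just ship the text" fails for many minority languages.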
Over the last couple of weeks, there has been growing buzz about Matter, a startup that is proposing a new business model for long-form science journalism and is raising funds on Kickstarter. Their approach is fairly straightforward: each week, they will produce one piece of ultra-high-quality journalism on a science or tech issue, and sell it for 99 cents on as many platforms as possible. It's less a paywall around a publication, and more an attempt to commoditize articles as discrete, sellable objects.
Will it work? The big debate has been between Felix Salmon (who likes the idea, and has been quite sanguine about paid content ever since the moderate success of the New York Times paywall) and Stephen Morse (who called Matter a "scam" and its creators "snake oil salesmen"—though he later said those terms were intentional hyperbole). Yesterday, they took to YouTube to hash it out face to face:
There are a lot of good points both for and against Matter. For one thing, they've already doubled their $50,000 funding goal on Kickstarter, which at least demonstrates that people are willing to part with their money for something like this—but Kickstarter backers aren't necessarily representative of the broader consumer base they will need to court with the actual product. One of Salmon's key points is that since they are raising funds there, instead of going to venture capitalists, their business goals are less daunting: they just need to build something sustainable, not something that will make millions of dollars. The creators have said they don't plan to draw salaries unless the company is a massive hit, and they've put a lot of focus on keeping their costs down. Overall, their financial goals are very different from, and a lot more attainable than, the average startup's.
On the other hand, as Morse points out, there is plenty of great content out there for free. He doesn't believe there is truly an untapped demand for this kind of content, so Matter won't be able to compete. Salmon thinks the Kickstarter numbers say otherwise. Either way, the question is the same: can Matter produce content that is so good and so unique that people will want to pay for it?
I'm reminded of News Corp.'s iPad-only product The Daily, which launched last year to a lot of hype but quickly began losing engagement and talent. People were asking the same question: could The Daily produce content so good that people would need to read it in order to stay in the loop? Obviously, it couldn't.
Matter is more focused than The Daily and is targeting an entirely different audience with a higher standard of journalism, which gives it a leg up in that regard—but I still doubt its potential for one key reason that isn't getting much attention: the sharing barrier. The problem with putting a price tag on online content is that it actually reduces the appeal of that content, because one of the things people value most about good content is the ability to share and discuss it with their social circle. Exclusivity is a minus, not a plus, with most kinds of content (financial news being an exception, which is why most of the more successful paywalls online are on financial sites). Some people will be willing to pay 99 cents for an article, but a lot of them won't be willing to ask their friends to pay too by posting a link on Facebook, Twitter or their blog. Those who do are sure to get a lot of confused replies asking "wait, I have to pay?" Moreover, with a pay-per-article model instead of a subscription model, readers are going to have to decide each week if they want to keep paying. The mental transaction cost of 99 cents may be extremely low, but it adds up when you multiply it like that. These factors are going to make it very difficult to grow and retain their readership.
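The mental-transaction-cost point is easier to see with the arithmetic written out (the 99-cent weekly cadence is Matter's stated model; the annualization is mine):

```python
# One article per week at Matter's announced price.
price_per_article = 0.99   # dollars
weeks_per_year = 52

# Fifty-two separate 99-cent purchase decisions...
annual_cost = price_per_article * weeks_per_year

# ...add up to a real annual commitment.
print(f"${annual_cost:.2f} per year")   # $51.48 per year
```

Fifty dollars a year is subscription-magazine territory, but unlike a subscription, the reader has to re-decide to pay it fifty-two separate times.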
If Matter streamlines their costs enough, and their content is good enough, it's entirely possible that they can build a small core group of readers that keeps the one-article-a-week model afloat—but if that's the best possible outcome, is this really the best possible approach? Journalism online needs more than small-scale sustainable models; it needs ways to grow and expand, and that is never going to happen without advertising dollars. As Salmon says, Matter is trying to do "something which has historically been extremely rare, in the world of journalism: selling stories to readers, as opposed to selling readers to advertisers", and that means they are tackling the wrong problem. They still plan to include some advertising in the articles, but they should be putting a lot more focus on that side of the equation. There are companies out there that want to support this kind of content, and Matter's low-cost, VC-free model puts them in the perfect position to experiment with innovative sponsorship models—an approach that would be bolstered by opening up the content instead of locking it down, ultimately creating much bigger opportunities to fund quality journalism and turn a profit.
One of the points we often try to make at Techdirt is that the effects of disruptive technologies are going to be felt far beyond the entertainment and publishing industries—they are not limited to the online world. The internet creates abundance of information, but it also creates a push towards decentralization in all things, and that's one of the big ways it intersects with the physical: although you can't download a car, you can create whole new systems for buying, selling, renting, reviewing and maintaining cars, and those systems will replace established but less-efficient ones.
Nobody is immune—not even the last disruptor. Companies like Zipcar changed the game with their car-sharing services, but they are already facing new challengers. RelayRides, launching nationally this week, has a model that takes things one step further:
While those companies own fleets of cars, RelayRides is entirely peer-to-peer — if you have a car, then you can make it available for rental when you're not using it. RelayRides says the average car owner makes $250 a month from the program.
Since it takes advantage of the cars already on the road, founder and chief community officer Shelby Clark argues that peer-to-peer carsharing can have a big impact—after all, a fleet-based company couldn't simply declare one day that it's launching nationally.
That's especially true in non-urban areas. For example, Zipcar doesn't have any cars available in the Los Angeles suburb where I grew up, and it's hard to imagine that establishing a fleet there would make economic sense anytime soon.
How big and how successful this approach will become remains to be seen, but it's a creative idea that makes a clear point: disruption can happen anywhere, to anyone. As the entertainment industry continues to fight progress, experts from every side of the debate love to make profound-sounding statements about how the internet has changed our media consumption habits, but that's old news. From mobile-based taxi & limo services to the coming era of 3D printers and things like the Pirate Bay's Physibles site, digital technologies are disrupting a lot of things, not just media. Governments and industries cannot continue getting bogged down in tiresome debates about saving obsolete business models—not if they want to have any hope of embracing the opportunities, and solving the potential problems, of a fast-approaching future.
Last week, over on our Step 2 discussion platform, we kicked off a discussion on what an "innovation agenda" might look like for a US politician in 2012. What kinds of regulatory changes should they be focused on? This effort, done in partnership with Engine Advocacy, has already kicked off a nice discussion over there, with some interesting ideas being tossed around. If you haven't yet, please join in. I'm not surprised that copyright issues and open internet issues top the list of things most interesting to folks -- the SOPA/PIPA debate has pretty much guaranteed that. I am a little surprised that issues around helping skilled entrepreneurs -- the folks who create jobs -- were seen as less pressing than some of the others on the list. Either way, the discussion is still going on there, and we'll be taking it further over the coming weeks and months, so feel free to join in.
Because of their low cost and small size, they can then be shipped to activists and NGOs in areas where free speech is restricted.
"This is especially useful for activist organizations, human rights organizations, any group composed of a few dozen people who need to have an internal secure communication service," said Mr Kobeissi.
Small, portable Raspberry Pi computers set up to run Cryptocat, he believes, may be a quick way to build such a service.
An interesting consequence of Moore's Law and the ready availability of free software is that powerful computers can now be produced for just tens of dollars, and in an extremely small package. The low cost means that organizations supporting activists can send in many such systems to countries with human rights problems, and replace them if they are discovered and confiscated or destroyed. The size makes it much easier to import them discreetly, as well as to conceal them in countries that try to keep computing under tight control.
And it's not just the Raspberry Pi that will be making this possible. Its high-profile success is likely to mean that in due course other systems will be produced that are cheaper and smaller. That will ensure they are even more popular with the educational market and hackers -- and even more problematic for oppressive regimes.
Amidst the growing enthusiasm for digital texts -- ebooks and scans of illustrated books -- it's easy to overlook some important drawbacks. First, that you don't really own ebooks, as various unhappy experiences with Amazon's Kindle have brought home. Second, that a scan of an illustrated book is only as good as the scanning technology that is available when it is made: there's no way to upgrade a scan to higher quality images without rescanning the whole thing.
Both of these make clear why it's good to have physical copies as well as digital versions: analog books can't be deleted easily, and you can re-scan them as technology improves.
But there's a problem: as more people turn to digital books as their preferred way of consuming text, libraries are starting to throw out their physical copies. Some, because nobody reads them much these days; some, because they take up too much space, and cost too much to keep; some, even on the grounds that Google has already scanned the book, and so the physical copy isn't needed. Whatever the underlying reason, the natural assumption that we can always go back to traditional libraries to digitize or re-scan works is looking increasingly dubious.
Fortunately, Brewster Kahle, the man behind the Alexa Web traffic and ranking company (named after the Library of Alexandria, and sold to Amazon), and the Internet Archive -- itself a kind of digital Library of Alexandria -- has spotted the danger, and is now creating yet another ambitious library, this time of physical books:
In a wooden warehouse in this industrial suburb [in Richmond, California], the 20th century is being stored in case of digital disaster.
Forty-foot shipping containers stacked two by two are stuffed with the most enduring, as well as some of the most forgettable, books of the era. Every week, 20,000 new volumes arrive, many of them donations from libraries and universities thrilled to unload material that has no place in the Internet Age.
As that hints, another important motive for preserving physical copies of as many books as possible is to create the ultimate backup of our digital texts and scans in case of "digital disaster". Kahle himself touched on this in June last year, when he first announced the "Physical Archive of the Internet Archive":
A reason to preserve the physical book that has been digitized is that it is the authentic and original version that can be used as a reference in the future. If there is ever a controversy about the digital version, the original can be examined. A seed bank such as the Svalbard Global Seed Vault is seen as an authoritative and safe version of crops we are growing. Saving physical copies of digitized books might at least be seen in a similar light as an authoritative and safe copy that may be called upon in the future.
As with the Svalbard Global Seed Vault, we naturally hope we will never find ourselves in a situation where we need to call upon analog backups in Kahle's Global Book Vault; but it's good to know they will be there for at least some of those ebooks and digital scans, if we ever do.
Apparently there was some tension at the Mobile World Congress—the world's largest mobile phone trade show—as the growing battle over text messaging took center stage. As you may know, SMS text-messaging is a rip-off, and a huge cash cow for the mobile telecoms, who charge premium rates for a service that has an effective cost of zero (SMS messages are encoded into regular signals that cell towers have to send anyway). But they are losing a growing chunk of that income to data-based messaging services like BBM, iMessage, WhatsApp, Facebook Messenger and more. Naturally, they aren't happy, and they try to frame it as an unfair disruption of their business model:
Needless to say, mobile companies are not happy at the flood of free messaging services piggybacking on their networks. Telecom Italia chief executive Franco Bernabe told MWC that free messaging services are undercutting the ability of phone companies to invest in their networks. Paid texting, or SMS, has been a cash cow for phone companies which uses minimal network capacity.
The new players "have based their innovation in the mobile domain, without a deep understanding of the complex technical environment of our industry. This is increasingly creating significant problems to the overall service offered to the end user and driving additional investments for mobile operators," Bernabe said.
None of that makes a lick of sense. Bernabe is basically saying that everyone else has a responsibility not to build data apps that compete with telecom services, but unfortunately for him, that's not how free markets work. Rather than seizing the huge opportunity that is the growing demand for wireless data access, the telecoms have decided to focus on the one thing that has stopped SMS from being completely replaced already: the lack of a single standard alternative. The GSMA, a mobile industry group, has built a new cross-platform messaging service that they hope to get pre-installed on all cellphones and establish as the standard for all text, photo and video messaging—though they haven't announced how much they plan to charge for it. They claim that nine out of ten major device makers have signed up, with all eyes falling on Apple as the probable holdout: Apple is on its own crusade to kill SMS messaging, and it likely would have succeeded by now if it weren't committed to a walled-garden approach that pushes everyone towards iOS.
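To put the "rip-off" claim in numbers, here's a back-of-the-envelope comparison of SMS pricing to ordinary data pricing (the 10-cents-per-message rate is an assumption for illustration; actual pay-per-use rates vary by carrier):

```python
# An SMS payload is at most 140 bytes (160 seven-bit characters).
bytes_per_sms = 140
price_per_sms = 0.10            # assumed pay-per-use rate, in dollars

# How many messages fit in one megabyte, and what that megabyte costs
# when it is sold one text at a time.
msgs_per_megabyte = 1_048_576 / bytes_per_sms        # ~7,490 messages
cost_per_megabyte = msgs_per_megabyte * price_per_sms

print(f"~${cost_per_megabyte:,.0f} per MB")          # ~$749 per MB
```

Compare that to the few dollars per megabyte carriers charge for the same bits delivered over a data plan, and it's clear why data-based messaging apps are eating SMS alive.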
Of course, the same conference was also attended by the companies that have the telecoms so frightened. Joe Stipher, co-founder of messaging service Pinger, had a wiser perspective on the direction things are headed:
"Text messaging is free, and calling is going to be free," said Stipher, wearing jeans that contrasted with the dark suits favoured by thousands of mobile phone company executives attending the four-day 2012 Mobile World Congress that ended Thursday. "Data is going to be like electricity or water, not totally free, but do you worry about giving someone a glass of water at your home or letting them plug in? No."
I actually think that could be slightly better worded: in the future, there will be no more distinctions like "text" and "voice". Everything is just data anyway. But Stipher is absolutely right that bandwidth is becoming a generic utility, and that's something the telecoms have to accept. For some reason, they are terrified of becoming "dumb pipes"—they want to be "smart pipes" that charge premiums for different "kinds" of data, even though that's basically an imaginary concept. It's an odd attitude, because being a dumb pipe for something that everybody wants is a pretty good position, and if you accept it then you stand to make more money by letting people build whatever they want on top of what you provide. Truly, this would be the smart thing for a pipe to do, and Stipher has some fun with this by co-opting the term for himself. The carriers play along, using their own definition, and what results is an amusing portrayal of the mental disconnect that exists:
[Stipher] explained that "the carriers should be smart, reliable pipes", providing internet data access the way utilities deliver reliable water and electricity. "They need to focus on being good network operators."
[Rene] Obermann [chief executive of Germany's Deutsche Telekom] said carriers are at a crucial point at which they must "develop our own, innovative product suites" through cooperation with the smaller messaging companies. "The smart pipe will be one of the areas where (telecommunications companies) will show their innovation," he said.
Of course, Obermann's own company has a venture capital division that invested $7.5 million in Pinger, so maybe on some level he knows which way the wind is blowing.
After years of not suing anyone (but always threatening that it might, someday), Intellectual Ventures has become more and more aggressive of late, suing lots of companies. A few weeks ago it sued AT&T, Sprint and T-Mobile over a bunch of patents that (of course) involved some of IV's favorite shell companies. Just as it was preparing this lawsuit, a VP from IV went public with an attempt to argue that all this litigation is a sign of innovation at work. The article is rather shocking in how it presents its argument: it mainly relies on the false assumption that correlation implies causation, pointing to historical periods in which innovation and patent lawsuits coincided. Of course, what it ignores is that the patent fights often come right after the innovation, not before. In other words, the patent battles aren't a sign that innovation is working; rather, they're a sign of patent holders freaking out that others are innovating. It's entirely about hindering innovation, not helping move it forward.
Along those lines, the folks at M-CAM who continue to call out bogus claims in patent lawsuits analyzed the patents in this IV lawsuit and found them... well... lacking:
Our systems found nearly 500 AT&T patents, with similar claims, that predate the fifteen asserted patents. Sprint Nextel also owns 12 patents that predate the asserted portfolio.
M-CAM also questions the claim that these lawsuits have anything at all to do with innovation, and hints at more nefarious reasons for the use of a bunch of shell companies:
Is IV’s patent litigation helping inventors or investors? Considering that the bulk of the patents in suit were each “acquired” from what the USPTO characterizes as a “merger” with a different relatively unknown LLC, we’ll let you decide. Seems to us that it simply represents an attempt to use opacity and “hidden weapons” for a tactical assault having ABSOLUTELY NOTHING to do with innovation. In fact, these kinds of structures are also typically employed for tax “optimization” which is to say, to avoid paying taxes for any economic gains resulting from a successful assault, ahem sorry again, we mean “settlement”.
By the way, you may have noticed that Verizon is conspicuously absent from the list of mobile operators being sued here. That's because Verizon paid the entrance fee and is a "member" of the IV club... which apparently only cost the company $350 million. Oh yeah... and it then became an enabler: one of the patents in the new lawsuit was once owned by Verizon.
Last month, we wrote about how the USPTO had stepped into a brewing fight between copyright lawyers and patent lawyers, saying that it believed that submitting journal articles as part of the patenting process was fair use. Apparently, the copyright lawyers working for the scientific journals disagreed... and the fight is on: the journals have sued a bunch of patent lawyers for making use of articles from the journals in preparing their patent applications. The journals, in their desperate desire to squeeze more cash out of everything, were demanding that patent lawyers get an additional license if they wanted to submit copies of journal articles along with patent applications.
While it's rare that you'll find me agreeing with the patent bar on very much, on this one, I'm on their side. The lawsuit, led by publisher John Wiley, is kind of crazy. We're not talking about people who are getting copies of the journal for free. These are generally people who have a legitimate subscription to the journals, and are submitting copies of the information as part of the patent process -- as they're required to do by law. This is just yet another attempt by the publishers to get paid for every single possible use, even by those who already have legitimate access. And, of course, these journals don't have the best reputation these days, given their attempts to block open access requirements. While there may be some appeal in making it more difficult to get a patent (something where I believe the bar needs to be much, much, much higher), I don't think this is a reasonable way to do so.
In nearly every way, it seems like submitting such a journal article as part of a patent application process should be seen as fair use. It really does fit the kind of key "spirit" of the fair use rule.