This one is fairly incredible. Bloomberg LP's main business is selling ridiculously expensive terminals to Wall Street/financial folks for tracking market information. While I understood why they were able to succeed early on, I've been shocked that the internet hasn't seriously disrupted their business over the past decade or so. However, the company also has a pretty big journalism business (it even owns BusinessWeek, which it bought for pennies a few years ago). Now it's coming out that journalists at Bloomberg had all sorts of access to how customers use the terminals.
Until recently, all Bloomberg employees could access information about when and how terminals were used by any customer. But after complaints by Goldman Sachs and JP Morgan, Bloomberg says its 2,000 or so journalists no longer have access to that information, though other staff still do. Bloomberg has more than 15,000 employees.
The banks were concerned that Bloomberg News was keeping tabs on terminal usage in order to aid its reporting. JP Morgan specifically cited coverage of the bank’s disastrous derivatives trading, known as the “London Whale,” which Bloomberg was the first to reveal.
Incredibly, the reporters also had access to "help" transcripts of any customer and could call them at will, which apparently some of them did for fun.
Several former Bloomberg employees say colleagues would look up chat transcripts of famous customers, like Alan Greenspan, for amusement on slow workdays. The transcripts were typically mundane and hardly incriminating, but who wouldn’t enjoy watching a former US Treasury secretary struggle to use a computer? And, in theory, the substance of someone’s query to customer service could reveal specific information that he’s interested in, tipping off a reporter to a story.
These are the kinds of things that small companies sometimes screw up with poor controls over information. But you would think a massive company like Bloomberg -- especially one that deals in critical financial information -- would have much tighter controls. I'd be curious whether this violates whatever privacy policies Bloomberg has with its customers. At the very least, it should make Bloomberg customers pretty damn skeptical of continuing to use its terminals. Seems like a huge opportunity for competitors with better controls to step in.
Last month Techdirt wrote about the case of the giant pharma company AbbVie seeking to prevent the European Medicines Agency from releasing basic health safety data that AbbVie claims contains commercially sensitive information. Unfortunately, an interim injunction has just been granted to that effect:
The European Medicines Agency (EMA) has been ordered by the General Court of the European Union not to provide documents as part of two access-to-documents requests until a final ruling is given by the Court. These interim rulings were made as part of court cases brought by two pharmaceutical companies, AbbVie and InterMune. The companies are challenging the Agency's decisions to grant access to non-clinical and clinical information (including clinical study reports) submitted by companies as part of marketing-authorisation applications in accordance with its 2010 access-to-documents policy.
As the EMA notes, it's not as if the release of this data is unprecedented:
Since November 2010, the Agency has released over 1.9 million pages in response to such requests. This is the first time that the policy has been legally challenged.
That obviously raises the question of why AbbVie and InterMune have problems with drug safety data being released when other companies don't. Fortunately, there is very broad support for the EMA's attempt to make this important information available for other researchers to check and analyze:
Since the two pharmaceutical companies filed these legal actions, the EMA has received more than 30 statements of support from various stakeholders, including the European Ombudsman, national competent authorities, members of the Agency's Management Board, Members of the European Parliament, academic institutions, non-governmental organisations, citizens' initiatives and scientific journals, some of whom have also applied to formally intervene in defence of the EMA at the Court.
There's a crucially important principle here: that public safety must outweigh any claims of commercial confidentiality. Let's hope that the General Court of the European Union recognizes that in its final judgment, which will have a major impact on health and safety not just in Europe but, as a knock-on effect, around the world too.
One of the many complaints about the "six strikes" Copyright Alert System (CAS) in the US is the fact that while it doesn't directly lead to litigation, there is nothing in the agreement that prevents copyright holders from seeking out and using information from the six strikes system in copyright infringement lawsuits. And, surprise surprise, it appears that at least one copyright trolling operation has jumped to the front of the line in testing this out. Malibu Media, which was already building up quite the reputation as a copyright troll (not quite Prenda-like, but still up there), is trying to get Verizon to cough up a ton of information, including details from its six strikes system.
As TorrentFreak notes, the list of information demanded via subpoena has been culled down to the following:
DMCA notices and if applicable six strike notices sent to the applicable subscribers.
Defendants’ bandwidth usage.
Information about the (reliability of the) correlation of the IP-Address to the subscriber for purposes of use at trial.
Content viewed by Defendants to the extent the content is the same show or movie that Plaintiff learned from third-party BitTorrent scanning companies that Defendants also used BitTorrent to download and distribute.
Verizon (which has been one of the better companies in resisting copyright trolls) is objecting and has so far refused to hand over the information, arguing that it does not wish to help the "shakedown tactics" of copyright trolls. Malibu is now trying to have the court compel Verizon to cough up the info. Given that we'll likely see more of this, how the court responds should be worth following.
My goodness. Yesterday we posted about Rep. Louie Gohmert's incredible, head-shakingly ignorant exchange with lawyer Orin Kerr during a Congressional hearing concerning "hacking" and the CFAA. In that discussion, Gohmert spoke out in favor of being able to "hack back" and destroy the computers of hackers -- and grew indignant at the mere suggestion that this might have unintended consequences or lead people to attack the wrong targets. Gohmert thought that such talk was just Kerr trying to protect hackers.
I thought perhaps Rep. Gohmert was just having a bad day. Maybe he's having a bad month. In a different hearing, held yesterday concerning ECPA reform, Gohmert opened his mouth again, and it was even worse. Much, much worse. Cringe-inducingly clueless. Yell at your screen clueless. Watch for yourself, but be prepared to want to yell.
The short version of this is that he seems to think that when Google has advertisements on Gmail, that's the same thing as selling all of the information in your email to advertisers. And no matter how many times Google's lawyer politely tries to explain the difference, Gohmert doesn't get it. He thinks he's making a point -- smirking the whole time -- that what Google does is somehow the equivalent of government snooping, in that he keeps asking if Google can just "sell" access to everyone's email to the government. I'm going to post a transcript below, and because I simply cannot help but point out how ridiculously uninformed Gohmert's line of questioning is, I'm going to interject in the transcript as appropriate.
Rep. Gohmert: I was curious. Doesn't Google sell information acquired from emails to different vendors so that they can target certain individuals with their promotions?
Google lawyer whose name I didn't catch: Uh, no, we don't sell email content. We do have a system -- similar to the system we have for scanning for spam and malware -- that can identify what type of ads are most relevant to serve on email messages. It's an automated process. There's no human interaction. Certainly, the email is not sold to anybody or disclosed.
Gohmert: So how do these other vendors get our emails and think that we may be interested in the products they're selling?
Okay, already we're off to a great start in monumental ignorance. The initial question was based on a complete falsehood -- that Google sells such information -- and after the lawyer told him that this is not true, Gohmert completely ignores that and still asks how they get the emails. It never seems to occur to him that they don't get the emails.
Google lawyer: They don't actually get your email. What they're able to do is through our advertising business be able to identify keywords that they would like to trigger the display of one of their ads, but they don't get information about who the user is or any...
Gohmert: Well that brings me back. So they get information about keywords in our emails that they use to decide who to send promotions to, albeit automatically done. Correct?
NO. Not correct. In fact, that's the exact opposite of what the lawyer just said. Gohmert can't seem to comprehend that Google placing targeted ads next to emails has NOTHING to do with sending any information back to the advertiser. I wonder, when Rep. Gohmert turns on his television to watch the evening news, does he think that the TV station is sending his name, address, channel watching info, etc. back to advertisers? That's not how it works. At all. The advertisers state where they want their ads to appear, and Google's system figures out where to place the ads. At no point does any information from email accounts go back to anyone. And yet Gohmert keeps asking.
And not understanding the rather basic answers. Unfortunately, the lawyer tries to actually explain reality to Gohmert in a professional and detailed manner, when it seems clear that the proper way to answer his questions is in shorter, simpler sentences such as: "No, that's 100% incorrect."
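The distinction the lawyer keeps trying to draw can be sketched in a few lines of code. This is a toy model, not Google's actual system (the function and data names here are invented for illustration): advertisers register keywords up front, the matching happens entirely inside the ad-serving system, and the only thing that ever flows outward is which ad to display -- never the email, the keywords matched, or anything about the user.

```python
# Toy sketch of keyword-triggered ad placement (NOT Google's real system).
# Advertisers register keywords ahead of time; matching runs internally.
# The return value is only the ad to display -- no email content, no user
# identity, and no addresses ever flow back to the advertiser.

def choose_ad(email_text, ad_keywords):
    """Return the ad to show next to this email, or None if nothing matches."""
    words = set(email_text.lower().split())
    for keyword, ad in ad_keywords.items():
        if keyword in words:
            return ad  # only the ad travels outward; the email never does
    return None

ads = {
    "camera": "Acme Cameras: 20% off!",
    "travel": "Fly cheap with HypoAir",
}

print(choose_ad("Thinking of buying a new camera this weekend", ads))
# -> Acme Cameras: 20% off!
```

Note what the advertiser learns from this exchange: nothing. It paid for a keyword, and an ad impression happened somewhere. That's the asymmetry Gohmert keeps missing.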
Lawyer: The email context is used to identify what ads are most relevant to the user...
Gohmert: And do they pay for the right or the contractual ability to target those individuals who use those keywords?
Lawyer: I might phrase that slightly differently, but the gist is correct, that advertisers are able to bid for the placement of advertisements to users, where our system has detected might be interested in the advertisement.
Gohmert: Okay, so what would prevent the federal government from making a deal with Google, so they could also "Scroogle" people, and say "I want to know everyone who has ever used the term 'Benghazi'" or "I want everyone who's ever used... a certain term." Would you discriminate against the government, or would you allow the government to know about all emails that included those words?
Okay, try not to hit your head on your desk after that exchange. He (perhaps accidentally) gets a statement more or less correct -- that advertisers pay to have their ads show up -- but immediately follows it up with something completely unrelated. First, he tosses in "Scroogled" -- a term that Microsoft uses in its advertising against Gmail and in favor of Outlook.com -- suggesting exactly where this "line" of questioning may have originated. Tip to Microsoft lobbyists, by the way: if you want to put Google on the hot seat, it might help to try a line of questioning that actually makes sense.
Then, the second part, you just have to say huh? The lawyer already explained, repeatedly, that Google doesn't send any information back to the advertiser, and yet he's trying to suggest that the government snooping through your email is the same thing... and Google somehow not giving the government that info is Google "discriminating" against the government? What? Really?
Lawyer: [confounded look] Uh... sir, I think those are apples and oranges. I think the disclosure of the identity...
Gohmert: I'm not asking for a fruit comparison. I'm just asking would you be willing to make that deal with the government? The same one you do with private advertisers, so that the government would know which emails are using which words.
Seriously? I recognize that there are no requirements on intelligence to get elected to Congress, but is there anyone who honestly could not comprehend what he meant by saying it's "apples and oranges"? But, clearly he does not understand that because not only does he mock the analogy, he then repeats the same question in which he insists -- despite the multiple explanations that state the exact opposite -- that advertisers get access to emails and information about email users, and that the government should be able to do the same thing.
Lawyer: Thank you, sir. I meant by that, that it isn't the same deal that's being suggested there.
Gohmert: But I'm asking specifically if the same type of deal could be made by the federal government? [some pointless rant about US government videos aired overseas that is completely irrelevant and which it wasn't worth transcribing] But if that same government will spend tens of thousands to do a commercial, they might, under some hare-brained idea like to do a deal to get all the email addresses that use certain words. Couldn't they make that same kind of deal that private advertisers do?
Holy crap. Gohmert, for the fourth time already, nobody gets email addresses. No private business gets the email addresses. No private business gets to see inside of anyone's email. Seeing inside someone's email has nothing to do with buying ads in email. If the government wants to "do the same deal as private advertisers" then yes it can advertise on Gmail... and it still won't get the email addresses or any other information about emailers, because at no point does Google advertising work that way.
Lawyer: We would not honor a request from the government for such a...
Gohmert: So you would discriminate against the government if they tried to do what your private advertisers do?
No. No. No. No. No. The lawyer already told you half a dozen times, no. The government can do exactly what private advertisers do, which is buy ads. And, just like private advertisers, they would get back no email addresses or any such information.
Lawyer: I don't think that describes what private advertisers...
Gohmert: Okay, does anybody here have any -- obviously, you're doing a good job protecting your employer -- but does anybody have any proposed legislation that would assist us in what we're doing?
What are we doing, here? Because it certainly seems like you're making one of the most ignorant arguments ever to come out of an elected official's mouth, and that's saying quite a bit. You keep saying "private advertisers get A" when the reality is that private advertisers get nothing of the sort -- and then you ignore that (over and over and over and over again) and then say "well if private advertisers get A, why can't the government get A." The answer is because neither of them gets A and never has.
Gohmert: I would be very interested in any phrase, any clauses, any items that we might add to legislation, or take from existing legislation, to help us deal with this problem. Because I am very interested and very concerned about our privacy and our email.
If you were either interested or concerned then you would know that no such information goes back to advertisers before you stepped into the room (hell, before you got elected, really). But, even if you were ignorant of that fact before the hearing, the fact that the lawyer tried half a dozen times, in a half a dozen different ways to tell you that the information is not shared should have educated you on that fact. So I'm "very interested" in what sort of "language" Gohmert is going to try to add to legislation that deals with a non-existent problem that he insists is real.
Gohmert: And just so the simpletons that sometimes write for the Huffington Post understand, I don't want the government to have all that information.
Rep. Sensenbrenner: For the point of personal privilege, my son writes for the Huffington Post.
Gohmert: Well then maybe he's not one of the simpletons I was referring to.
Sensenbrenner: He does have a PhD.
Gohmert: Well, you can still be a PHUL.
Har, har, har... wait, what? So much insanity to unpack. First of all, Gohmert seems to think that people will be making fun of him for suggesting that the government should "buy" access to your email on Google. And, yes, we will make fun of that, but not for the reasons he thinks. No one thinks that Gohmert seriously wants the government to buy access to information on Google. What everyone's laughing (or cringing) at is the idea that anyone could buy that info, because you can't. No private advertiser. No government. It's just not possible.
But, I guess we're all just "simpletons."
Seriously, however, we as citizens deserve better politicians. No one expects politicians to necessarily understand every aspect of technology, but there are some simple concepts that you should at least be able to grasp when explained to you repeatedly by experts. When a politician repeatedly demonstrates no ability to comprehend a rather basic concept -- and then grandstands on his own ignorance -- it's time to find better politicians. Quickly.
One of the initiatives gaining momentum around the world is open data -- the idea that, for example, non-personal data affecting the public should be made freely available. That's partly to improve transparency, so that citizens are more informed about what is happening, and partly to stimulate new kinds of business that build products and services based on that data.
An important category of open data that boosts transparency concerns basic drug safety information. Last month, Techdirt wrote about the AllTrials initiative that seeks to have key information about clinical trials placed in the public domain. As part of a wider move towards greater openness, the European Medicines Agency, the main body that licenses drugs in Europe, is starting to make available information that has hitherto been withheld.
AbbVie, a pharmaceutical company, has sought an injunction to block Europe's medicines regulator from releasing "confidential" and "commercially-sensitive" information on its blockbuster rheumatoid arthritis drug, a spokeswoman for the U.S. drugmaker confirmed on Sunday.
The Chicago-based company had taken legal action against the European Medicines Agency to stop it from releasing data on the effects in individual patients in clinical trials for its drug Humira, the Financial Times reported earlier on Sunday.
Except, of course, this isn't "confidential" and "commercially-sensitive" information: it's just basic data about its safety and efficacy. Doctors and patients surely have a right to know this before using products that could potentially have serious, even fatal, side-effects.
The project is part of EFSA's continuing commitment to openness and addresses recommendations made by an independent evaluation report of the Authority's performance to further enhance transparency in its decision-making processes. EFSA's Science Strategy also highlights the importance of the Authority playing a leading role in making relevant scientific data more accessible to all interested parties.
Here's one particular set of data that it has now released:
Given the level of public interest, EFSA will make all data on genetically modified (GM) maize NK603 publicly available on its website today (14 January).
Once more, that seems reasonable, since the public ought to be able to find out what is going into the food chain whose end-products it will consume. But some disagree: according to a story on Bakeryandsnacks.com, Monsanto is threatening to sue the EFSA over the release of this data.
What makes this a little confusing is that the company is quoted in that article as saying that it "firmly supports transparency" -- and yet here it is fighting tooth and nail against precisely that. Apparently, Monsanto also wants the regulatory environment in Europe to be "science-based". Modern science requires experimental data to be made available so that anyone can check the validity of the conclusions that have been drawn from it. If it can't be scrutinized, the conclusions can't be confirmed, and it's not science. So, given its call for "science-based" regulation, why does the company want to keep that data hidden? A cynic might almost suspect that Monsanto and AbbVie have something to hide.
There are lots of ways to store information nowadays -- from cloud services to nano-lithography to synthesizing custom strands of DNA. Some methods are cheaper or more convenient than others, but if physical space is really at a premium, then encoding a gazillion bits of data on a few grams of DNA seems like the way to go. Here are just a few projects working on using DNA as an archiving medium.
Just recently, we pointed to Google's latest Transparency Report, which showed a massive increase in requests for info on users from government agencies. However, it also showed that a much lower percentage of such requests were being honored, raising some questions about how Google handled such requests. Well, wonder no more (or, at least, wonder a little less) as Google has now explained the process by which it handles such requests, going into a fair bit of detail (you have to click through) in terms of the legal requirements and how Google handles different types of requests, and what data Google may be compelled to reveal. However, in an accompanying blog post, Google makes clear that it often pushes back:
When government agencies ask for our users’ personal information—like what you provide when you sign up for a Google Account, or the contents of an email—our team does several things:
We scrutinize the request carefully to make sure it satisfies the law and our policies. For us to consider complying, it generally must be made in writing, signed by an authorized official of the requesting agency and issued under an appropriate law.
We evaluate the scope of the request. If it’s overly broad, we may refuse to provide the information or seek to narrow the request. We do this frequently.
We notify users about legal demands when appropriate so that they can contact the entity requesting it or consult a lawyer. Sometimes we can’t, either because we’re legally prohibited (in which case we sometimes seek to lift gag orders or unseal search warrants) or we don’t have their verified contact information.
We require that government agencies conducting criminal investigations use a search warrant to compel us to provide a user’s search query information and private content stored in a Google Account—such as Gmail messages, documents, photos and YouTube videos. We believe a warrant is required by the Fourth Amendment to the U.S. Constitution, which prohibits unreasonable search and seizure and overrides conflicting provisions in ECPA.
This is definitely good to see -- and lots of other companies should do the same thing. However, it still remains an issue that governments can, and do, get lots of information with limited oversight -- even when companies push back.
Speaking of which, Twitter also came out with its latest transparency report, which highlights the information requests it gets as well. Both companies are really leading the way on transparency here, but it's a shame that these stories are even newsworthy, rather than the way most large companies act.
We've obviously been covering a lot about Aaron Swartz lately, but his case is really just one of many similar cases involving people in positions of authority who simply don't understand basic technology, but feel that something must be illegal because they try to overlay an analog view on a digital world. In the Swartz case, Carmen Ortiz famously used the incredibly misguided and misleading "stealing is stealing" concept. However, as Cory Doctorow has been fond of pointing out lately, we're entering a war on general purpose computing, and this is just one battle front.
Two other recent skirmishes show the same sorts of things happening in slightly different contexts. A few months ago, we wrote about the case of Andrew Auernheimer, the security researcher who's been convicted and likely to face a long period of time in jail for exposing a blatant security hole from AT&T that allowed him (and anyone else) to gather personal data on the owners of any iOS device. Remember, AT&T set up some stupid security, making all of this data public via its own API. Now about to be sentenced, Auernheimer was asked to write up a "statement of responsibility" for the court, and chose to do a blog post in which he calls out what a farce the whole situation is:
The facts: AT&T admitted, at trial, that they “published” this data. Their words. Public-facing, programmatic accesses of APIs happen upwards of a trillion times per day. Twitter broke 13 billion on their API ages ago. This is something that happens more than the entire population of Earth, daily. The government has no problem with this up until you transform the output into something offensive to important people. People with “disruptive” startups, this is your fair warning: They are coming for you next.
The other one of my prosecutors, Zach Intrater, said that a comment I made about Goatse Security, my information security working group, starting a certification process to declare systems “goatse tight” was evidence of my intent to personally profit. For those not in on the joke: Goatse is an Internet meme referencing a man holding open his anus very widely. The mind reels.
I can’t survive like this. I am happy to be hitting a prison cell soon. They ruined my business. The feds get approval of who I can work for or with: they rejected one company because the CEO had a social network profile with an occupation listed as “hacker.” They prohibit me from touching any computer that isn’t federally monitored. I do my best to slang Perl code on an Android device to comply with my bail conditions. It isn’t pretty.
Meanwhile, up in Canada, there's been a fair bit of talk about how Dawson College computer science student Ahmed Al-Khabaz was expelled for discovering a security hole in a system used across many Canadian colleges to store personal data of students. In his case, part of the problem was that, after alerting people to the hole, he went back a few days later to run a script to see if they had closed the hole. This caused the company that managed the system to accuse him of criminal activity:
“It was Edouard Taza, the president of Skytech. He said that this was the second time they had seen me in their logs, and what I was doing was a cyber attack. I apologized, repeatedly, and explained that I was one of the people who discovered the vulnerability earlier that week and was just testing to make sure it was fixed. He told me that I could go to jail for six to twelve months for what I had just done and if I didn’t agree to meet with him and sign a non-disclosure agreement he was going to call the RCMP and have me arrested. So I signed the agreement.”
Even with the signed agreement, Dawson expelled him. While Dawson stands by its decision, the company Skytech says that it's now offered to hire him part time.
Yes, in all three of these cases you can argue that what the individual did went further than others would go. Some might call it discourteous. Swartz downloaded a lot more than the system intended, even though the network was open and the terms allowed for unlimited downloads. Auernheimer didn't just find the hole, but he scraped a bunch of data and sent some of it off to a reporter. Al-Khabaz didn't just find the security hole, but he also went back and probed the system again later. But, in the context of someone who lives in this kind of world and understands technology, all three represent completely natural behavior. If the technology allows it, why not probe the system and see what comes out? It's the natural curiosity of a young and insightful mind, looking to see what information is there. When it's made available, how do you not then seek to access it?
But there is a fundamental disconnect with an older, non-digital generation that doesn't get this. They think in terms of walls and locks, and clear delineations. The younger, digitally native, net-savvy generation looks at all of this as information that is available and accessible. The limitation is merely what they can reach with their computer. But this isn't a bad thing -- this is how we discover new things and build and learn. Treating that as criminal behavior is insane and backwards. It's trying to apply an analog concept to a digital world, and then criminalizing exactly what the system allows and what we should be encouraging people to do -- to push the network, to explore, to learn and to access information.
This is a culture clash, of sorts, but it represents a real problem, when we're criminalizing the most curious and adept computer savvy folks out there.
The Connecticut school shooting has pushed the discussion of gun control back into the media spotlight, along with providing a convenient soapbox for lawmakers, lobbying groups and pundits willing to politicize tragedies to push their agendas through. There's been a lot of vitriol on both sides of the issue, with discussion of Second Amendment rights often leading those involved to forget all about the opposing side's First Amendment rights.
Published under the fear-inducing title "The Gun Owner Next Door: What you don't know about the weapons in your neighborhood," the Journal News' interactive map of gun permit holders in the Lower Hudson Valley drew plenty of heat from gun owners who felt their personal information shouldn't have been made public. The map had the slight potential to affect criminal activity, either by steering would-be burglars to safer, weapon-free households, or by giving these same hypothetical opportunists a list of addresses from which to poach guns while their owners were at work.
Also troublesome was the inference made ever so lightly by the article's title: that weapons were dangerous, and by extension, so were their owners. The timing of the article was also problematic -- and intentional. The FOIA requests went out after the Newtown shooting, skewing the purpose of the info dump even further.
A red-dotted map indicating clusters of gun owners easily, under the circumstances, continued the connect-the-dots inference: with so many weapons around, surely the non-gun owning citizens of the Lower Hudson Valley had something to fear. In totality, it was a badly timed, name-and-shame piece that painted gun owners as ticking time bombs, opening with the story of a mentally disturbed man who had put together a large cache of unregistered weapons, "without any neighbors knowing" -- something no one would have had any interest in if he hadn't used one of his guns to shoot a neighbor in the head. Quotes on both sides of the issue are scattered throughout, but the implication was clear: guns are dangerous, whether in the hands of their rightful owners, or borrowed by murderers like Adam Lanza.
The question arises as to whether the Journal News should have published this information. Clearly, the gun owners knew (or should have known) their information was a matter of public record. But should it have been used in this fashion -- or at all? Their personal information was always a FOIA request away, but does that grant a press entity the right to tie this info into an agenda-loaded piece?
The answer, of course, is that the Journal News had the right to use it in this fashion, thanks to the information being of public record and the First Amendment. The paper has received tons of criticism for this piece, and rightfully so, but that's how free speech works. The response, an info dump on anyone involved with the Journal News, spearheaded by former lawyer Christopher Fountain, is also how free speech works.
Both info dumps will have their consequences, most likely in some form of harassment. Fountain's info dump more clearly paints the Journal News staff as villains, while the original piece left that on a more implicit level. Neither group involved has any true expectation of privacy, but both have claimed "victim" status. A followup post at the Journal News mentions that it has received threats along with the normal complaints, but that's something it clearly should have expected when it published a map that singled out gun owners for legal activity. (It should also be noted that the headline writers threw some slant into this post as well. The first headline, appearing at 8:39pm on Dec. 25th, read "The Journal News/Lohud.com assailed for publishing map of permit holders." The newer headline, published at 10:53pm, reads "Journal News' gun-owner database draws criticism.")
Fountain's response, while troubling in its own way, should also have been expected. Many people still labor under the illusion that their private lives are their own, while leaving so much exposed publicly via social networks like Facebook and LinkedIn, as well as through any number of government services. Failing that, there's always the phone book, which still publishes the names and addresses of a majority of US citizens -- a service that is the default unless the individual makes the effort to opt out.
The protections granted by the First Amendment will continue to generate ugliness that's often hard to defend. In this case, it opens a lot of people up to harassment and possible danger. People may decry "irresponsible" journalism, but if the First Amendment is to remain intact, that's going to remain a constant. The solution is always more speech, which can take many forms, many of them just as ugly as the original bit of controversial speech.
Google's latest transparency report is out, and the notable bit of info is that governments continue to increase how often they seek info about users. The growth there has been steady, which is immensely worrisome. There's also an equally troubling increase in attempts to censor content via Google, though in that case, the numbers were relatively flat until the first half of this year, when they shot way, way up.
Digging deeper into the data, it's not surprising to see the US top the list (by a wide margin) of governments seeking info from Google. Frankly, nothing on that list is all that surprising. Google's annotations on the takedown requests, however, once again show the incredibly thin-skinned nature of those in power, who then seek to abuse that power to censor information that makes them look bad. Just a couple of examples:
We received a request from the office of a local mayor to remove five blogs for criticizing the mayor. We did not remove content in response to this request.
We received a request from legal representatives of a member of the executive branch to remove 10 YouTube videos for alleged defamation. We did not remove content in response to this request.
There are a lot more like that, mostly from countries that have less respect for free speech than the US. However, some of the requests in the US are equally troubling:
We received five requests and one court order to remove seven YouTube videos for criticizing local and state government agencies, law enforcement or public officials. We did not remove content in response to these requests.
This one is concerning. What court ordered the takedown of a YouTube video criticizing local government officials? That seems like it should be public info.
Google also admits to taking down info pursuant to a court order concerning defamatory content, though at least some courts have argued that, thanks to Section 230, sites do not have to remove content, even if it's judged to be defamatory. Still, it's reasonable for Google to decide, as a matter of policy, that if a court finds content defamatory, and a proper court order is issued, that it will remove that content.
Interesting information, if still troubling, given the general trends.