Telcos Deny Trying To Turn FCC's Open Network Diagnostics Into A Closed, Proprietary Affair
from the well-of-course-they-are dept
The FCC has been working with M-Lab to measure basic network diagnostics using an open source solution, providing public information about internet network performance. This seems like a good thing... though you can see why not everyone would want data about the performance of their networks made public. Over the weekend, a warning went up that the telcos are pushing the FCC to stop using M-Lab and switch to their own ISP-managed diagnostics tools. Vint Cerf is raising the alarm about this:

Recently, the FCC measurement program has backed sharply away from their commitment to transparency, apparently at the bidding of the telcos in the program. The program is now proposing to replace the M-Lab platform with only ISP-managed servers. This effectively replaces transparency with a closed platform in which the ISPs -- whose performance this program purports to measure -- are in control of the measurements. This closed platform would provide the official US statistics on broadband performance. I view this as scientifically unacceptable.

For the health of the Internet, and for the future of credible data-based policy, the research community must push back against this move.

The FCC keeps insisting that it's committed to openness -- but all too frequently seems to give in to telco demands. So this warning is concerning.
For what it's worth, the telcos are claiming that Cerf is overreacting. In a response to his call for action, Verizon's David Young responded that there's nothing to see here, and that M-Lab and the telco efforts have co-existed and can continue to co-exist going forward.
Vint breathlessly suggests that the FCC is now backing away from this openness "at the bidding of the telcos" and claims the program is proposing to replace the M-Lab platform with only ISP-managed servers. THIS IS FALSE. ISPs have made no such request of the FCC nor has the FCC proposed to eliminate use of M-Lab's servers.

What has been proposed is that, in addition to continuing to use the data collected via the M-Lab servers, the FCC and SamKnows may also rely on the ISP-provided servers that have been in use since the beginning of the project. These ISP-provided servers meet the specifications required by SamKnows, as do the M-Lab servers. In fact, it was only because of the presence of these non-M-Lab, ISP-donated servers that SamKnows was able to identify problems with an M-Lab server that was affecting the results of the tests being conducted. M-Lab did not identify this server problem on its own. It was only fixed when SamKnows brought the issue to their attention. By the way, this problem forced the FCC to abandon a month's worth of test data, extend the formal test period and delay production of its report. Later, another M-Lab server location had transit problems that again affected results. This was the second M-Lab-related server problem in two months, and once again it was SamKnows, using the ISP-provided servers as a reference, who identified the problem and brought it to M-Lab's attention.

As with many such disputes, the reality may be somewhere in between the two claims here. It seems like Cerf's fear is that by establishing the telcos' servers on equal footing with M-Lab's open setup, it opens the door to replacing M-Lab's efforts and then potentially locking up the data. Young is correct that the openness is mainly due to FCC policy at this point, but that policy is dependent on the current leadership of the FCC, which could change. At the very least, it would be nice to see a stated commitment to keeping the information open on an ongoing basis, so that there isn't any need to worry going forward.
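Young's argument hinges on a cross-check: samples from one pool of measurement servers served as the reference that exposed anomalies in another. As a loose illustration of that idea only (the numbers, tolerance, and function below are hypothetical, not anything from the FCC/SamKnows program), flagging a divergent server can be as simple as comparing medians of concurrent throughput samples:

```python
from statistics import median

def flag_anomaly(reference_mbps, candidate_mbps, tolerance=0.25):
    """Flag a server whose throughput samples diverge from a
    reference pool by more than `tolerance` (as a fraction)."""
    ref = median(reference_mbps)
    cand = median(candidate_mbps)
    if ref == 0:
        return True  # no usable reference signal
    return abs(cand - ref) / ref > tolerance

# Hypothetical samples (Mbps) from servers measuring the same links.
isp_reference = [9.8, 10.1, 9.9, 10.0]
suspect_server = [6.2, 6.0, 6.5, 6.1]   # a transit problem might look like this
healthy_server = [9.7, 10.2, 9.8, 10.0]

print(flag_anomaly(isp_reference, suspect_server))   # True (divergent)
print(flag_anomaly(isp_reference, healthy_server))   # False (consistent)
```

Real diagnostics are far more involved, but the underlying point stands: an independent reference is what makes an anomaly in either pool visible at all.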
Filed Under: broadband, fcc, network diagnostics, open, proprietary, telcos, vint cerf
Companies: verizon
Reader Comments
i am a sam knows participant...
to wit: starting just before xmas 2011, our 3 Mbps DSL was *almost* unusable (and *was* -in fact- unusable for -you know- crazy stuff like watching videos or listening to music online) for almost 6 freaking MONTHS...
needless to say, calling our ISP resulted in nothing but lies and bullshit (and NOW they say we NEVER called during this 6 month period, the lying bastards!)...
the monthly report they gave me during this time showed the EXTREME variable speed, but didn't reflect that we were getting 1/10th to 1/20th the speed during our 'normal' usage time : from after-work-o-clock, to midnight...
sure, i bet if you measured at 3-4 in the morning, the speed was *somewhat* better; but for 90% of the time, IT SUCKED...
in any event, either they are not measuring 'real' performance, are taking random samples which didn't reflect our crappy service, or the ISP was spoofing the connection, who knows...
but -you know- putting the foxes in charge of the henhouse is always a good idea...
art guerrilla
aka ann archy
eof
What Verizon's David Young should have said when confronted was "Look over there! They are trying to sneak in SOPA again!" While everyone turns to look, he should drop a smoke bomb and let out an evil chuckle while running all the way to the bank.
=P
Comments
Ok,
For those who understand a few things about benchmark programs: many corporations have inserted their own code to bypass or modify a benchmark so that it performs best on their own hardware. Then comes the idea of a corporation offering to let you use a certain speed-test program to test their site.
There are many things to see and test when you measure a site and a connection:
OS lag
Site lag
Hop-count lag
System lag
Even your video card can add lag, as Windows waits for your video to do something before it decides to keep connecting. (Fun, isn't it?)
Lag is a general term, and different programs test in different ways, too. Just testing from your net card to another net card is very quick; transferring a program, rendering it, and then returning it is more thorough, and tests more than a ping from one machine to another.
I won't even get into traffic monitoring by certain groups, which can also add to your lag times.
For those of us older than dirt: we remember some of the old programs that did something in a straightforward fashion and gave us truthful details and information we could use, in a way that would tell us where the problems were.
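The commenter's point that "lag" is really several stacked components, and that different tools measure different slices of it, can be illustrated with a minimal timing sketch. This is a toy, not anything SamKnows or M-Lab runs: it times only the TCP handshake, ignoring DNS, server processing, and transfer time, and it demos against a local listener so it is self-contained:

```python
import socket
import time

def tcp_connect_ms(host, port, timeout=2.0):
    """Time only the TCP handshake -- one component of total 'lag'.
    Ignores DNS lookup, TLS, server processing, and transfer time."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

# Demo against a local listening socket so the sketch is runnable offline.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

rtt_ms = tcp_connect_ms("127.0.0.1", port)
listener.close()
print(f"TCP connect: {rtt_ms:.3f} ms")
```

Point a measurement like this at different hosts at different times of day and you get very different numbers than a bulk-transfer test would report, which is exactly why "which tool, measuring what, against whose server" matters.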
Re: Re: Comments
it would hardly be surprising if ISPs rigged their benchmarks too...
gpu manufs went (and prob still do) to EXTREME lengths to try and game the various popular graphics benchmarks...
...and it worked ! they would beat the other guys by reverse-engineering the benchmark code, and figuring out how they could trick it, anticipate it, or otherwise game the testing software/hardware...
the point being -made in the concurrent article about leahy's cameo, and the subsequent private showing that wasn't a gift 'cause they gamed it- *whatever* 'laws' (how quaint), 'rules', 'regulations', 'guidelines', 'by-laws', or other strictures we mere 99% *attempt* to emplace upon our betters, are ONLY worth the enforcement we can engender...
if we can't enforce (even weak-tea laws), then laws are all but meaningless... in fact, *worse* than meaningless, because they offer the *appearance* of lawfulness, when there is none...
harsh laws for us 99%, with draconian enforcement; and squishy, malleable, hardly-worth-mentioning 'laws' for the 1%, and those unenforced, at that !
i am certain that is a sure-fire recipe for a stable society...
art guerrilla
aka ann archy
eof
FCC take on story
The FCC is not considering replacing the Measurement Labs infrastructure. As part of a consensus-based discussion in the Measurement Collaborative, a group of public interest, research and ISP representatives, we have discussed how to enhance the existing measurement infrastructure to ensure the validity of the measurement data. Any such enhancements would be implemented solely to provide additional resiliency for the measurement infrastructure, not to replace existing infrastructure. Any data gathered would be subject to the same standards of data access and openness.
We look forward to continuing to work with all participants in a process that has provided American consumers and the research community with network performance data of an unmatched scale and scientific rigor. We appreciate the contributions of all participants, in particular Measurement Labs, to this effort.
Henning Schulzrinne
CTO, FCC