Yes, higher frequencies *generally* have lower ranges. But it's not a straightforward thing - some bands are attenuated much more than others. If you go up to the Terahertz range - the next frontier - this is a big deal. But also a big deal is that you're limited to line-of-sight, highly directional transmissions. So you end up with very different designs. A hypothetical design for a city is distribution stations at the tops of tall buildings using a band with good propagation characteristics - you can get hundreds of feet - sending to converters stuck on people's windows. These convert to a different band with limited propagation, so apartments and houses don't interfere with each other. You have to actively aim at the window units, but that was shown to be practical a couple of years ago.
Building the antennas and such is interesting because their scale is on the order of features on chips. So you actually build your antennas right on the chip with the electronics. I saw some samples which were cool: Some of the classic antenna designs, etched on a chip. But you can actually do much better by designing metamaterials.
Complicated stuff, but it was in the "engineering characterization" phase (i.e., we know that it works, now we need to figure out how to do it practically) 4 or 5 years ago. It's coming. -- Jerry
The ideas in the article are barely scratching the surface.
We currently subdivide the frequency spectrum into bands and assign each band to an exclusive use and user within a geographical area. This way of dealing with the spectrum was introduced early in the 20th century to avoid interference: Given the modulation mechanisms available at the time, it was the only approach. Of course, it also led to a whole visualization of spectrum as "real estate" that could be "owned" and had to be protected.
With SDR, this view of things is unnecessarily limiting - immensely so. Modulation techniques can share spectrum with minimal interference - spread spectrum effectively smears a narrow, tall signal at one "spot" in the spectrum over a much broader but shorter signal. If two "tall" signals overlap, both are wiped out; if parts of two (or more) "shorter" signals partially overlap, each gets a tiny bit noisier but (up to a point) they both get through. UWB (Ultra-Wide Band) spread spectrum takes this idea to an extreme (and is used today).
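To make the "smearing" concrete, here's a minimal direct-sequence sketch in Python/numpy (the spreading factor and sequences are made up for illustration, not taken from any real system): each data bit is multiplied by a fast pseudo-random chip sequence, which spreads its energy across a much wider band; multiplying by the same sequence again recovers the original signal.

```python
# Toy direct-sequence spread spectrum: spreading widens the spectrum
# while lowering the power at any one frequency; despreading undoes it.
import numpy as np

rng = np.random.default_rng(0)
spread_factor = 64                                  # chips per data bit
bits = rng.integers(0, 2, 256) * 2 - 1              # data symbols in {-1, +1}
chips = rng.integers(0, 2, spread_factor) * 2 - 1   # shared PN chip sequence

narrow = np.repeat(bits, spread_factor)             # unspread baseband signal
spread = narrow * np.tile(chips, bits.size)         # multiplied by the chips

for name, sig in [("narrowband", narrow), ("spread", spread)]:
    psd = np.abs(np.fft.rfft(sig)) ** 2 / sig.size
    occupied = (psd > psd.max() / 100).mean()       # rough fraction of band in use
    print(f"{name:10s}: peak PSD {psd.max():10.1f}, ~{occupied:.0%} of band occupied")

# Despreading: chips are +/-1, so multiplying by them twice is a no-op.
assert np.array_equal(spread * np.tile(chips, bits.size), narrow)
```

The "tall vs. short" point shows up in the printout: the spread signal's peak power density is far lower, with the same total energy smeared across most of the band.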
On top of this, "cognitive radio" relies on protocols in which transmitters "slip themselves into the conversation" where they find silent spots. This is how WiFi works today - it's why you can have large numbers of WiFi devices using a pretty narrow piece of spectrum "simultaneously" without interfering (up to a point; everything has some limit).
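A toy sketch of the "find a silent spot" idea (hypothetical names throughout; sense() stands in for a real energy detector, and real WiFi uses carrier sensing with random backoff rather than explicit channel scoring):

```python
# Toy cognitive-radio channel picker: measure each channel, transmit in
# the quietest one, and defer entirely when everything is busy.
import random

def sense(channel: int) -> float:
    # Stand-in for a real energy measurement; returns received power in dBm.
    return random.uniform(-100, -40)

def pick_channel(channels, busy_threshold_dbm=-70.0):
    readings = {ch: sense(ch) for ch in channels}
    quiet = {ch: p for ch, p in readings.items() if p < busy_threshold_dbm}
    if not quiet:
        return None                   # everything busy: back off and retry later
    return min(quiet, key=quiet.get)  # quietest channel wins

print(pick_channel(range(1, 12)))     # e.g., the 11 classic 2.4 GHz channels
```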
In the old days, geographical areas were large - typically on the order of large cities - as the high power in one narrow, tall signal carried a long way at the frequencies in use, so transmitters had to be far from each other to avoid interference. Cellular phones are an example of how modern technologies can use much smaller geographical areas (WiFi and Bluetooth use even smaller ones). The fact that the same frequencies are assigned across much larger areas - hundreds, even many thousands, of cells - is a product of history and politics, not a necessity of the technology.
Finally, the move to ever-higher frequency bands - which can carry data at ever-higher rates, which in turn implies the need for much lower power to carry the same amount of data (the power needed goes down as the square of the available bandwidth) - opens up way more possibilities.
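For the curious, the bandwidth-for-power tradeoff can be read off the Shannon capacity formula (standard information theory, not specific to any modulation; take the "square" above as a rule of thumb for the bandwidth-starved regime):

```latex
C = B \log_2\!\left(1 + \frac{P}{N_0 B}\right)
\qquad\Longrightarrow\qquad
P = N_0 B \left(2^{C/B} - 1\right)
```

Here B is the bandwidth, P the signal power, and N_0 the noise power density. For a fixed data rate C, the exponential term collapses as B grows, so added bandwidth buys steep power savings (down to a noise-determined floor).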
The "spectrum crunch" we face today is more a product of legacy technologies and regulations than of physics, as the Telco's want you to believe. Massive re-engineering will be needed - and the changes will be fought tooth and nail by the incumbents, who will see their "spectrum real estate", in which they've invested fortunes, dissolving away. But the time will come.
Another interesting point to consider: Spread spectrum techniques were originally developed by the military for two reasons: They are resistant to jamming, but they are also difficult to detect. A spread spectrum signal based on a cryptographic spreader, properly run, is visible only as slightly increased noise. If it's low power - all that's needed in a geographically limited piece of a mesh - you have to be very close to even notice that. So the traditional methods of finding and shutting down "rogue signals" don't work well against this kind of technology.
But the real control that the FCC (and analogous regulators the world over) have wielded over the spectrum in the last four decades or so is through regulation of the hardware. The old days of people throwing together radios from parts faded with the newer technologies and higher frequencies even by the 1980's. Stuff moved onto chips - eventually, it became impossible for anything but chips to do the job. While it was possible to put together your own police scanner, say, very few people were in a position to do it. Regulate what the few hardware makers are allowed to build and you can keep the vast majority of people out of the police bands if you want.
We just saw a manifestation of this in the regulation of 5GHz WiFi radio controls. You can change some things - but power and frequency are locked down in the hardware, and it's impractical to build your own.
SDR changes all that. It allows you to use stock components - D-to-A and A-to-D converters - driven by software to implement all that stuff which used to be in easily-controlled hardware. "How are you going to keep them down on the farm after they've seen the big city?"
The easy, tight regulation of the electromagnetic spectrum that's defined the last hundred years is going to dissolve. There will be battles exactly like the copyright battles we see today. There will be huge technological winners - and losers. But the wireless world 20, 30 years from now will be very, very different.
(a) The mesh itself doesn't necessarily have to run IP;
(b) If it does run IP, it can have private addresses inside of it, with the correspondence between internal and external addresses known only to the mesh itself - NAT at the scale of the mesh;
(c) You might as well build the thing using IPv6 and use self-assigned, rotating IPs - with, again, the addresses only known within the mesh. IPv6 addresses are large enough that you don't have to worry about collisions.
Keep in mind that you don't typically "own" an IP today. Rather, your ISP lends you one so that it can route packets to you. Thus, the ISP knows the correspondence between you as a customer and your IP address at any given time. In the case of a mesh network, it's the mesh itself that acts as your ISP. It, in turn, has to connect to the Internet somewhere - but it has its own address for the mesh as a whole; it doesn't externalize your address. A packet arrives at the mesh and then is somehow (details "TBD") routed through the mesh to you. Only the mesh need know who you are.
(Of course, you could build the mesh with IP "straight through", in which case your IP would indeed identify "you". Some meshes will probably be built that way. But it's not the only alternative.)
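A minimal sketch of the self-assigned, rotating-address idea (the mesh prefix below is a hypothetical ULA picked for illustration). With 64 random bits per address, the birthday bound puts the collision probability for n live addresses at roughly n^2 / 2^65 - negligible for any plausible mesh:

```python
# Self-assigned, rotating IPv6 addresses inside a hypothetical mesh prefix.
import ipaddress, secrets

MESH_PREFIX = ipaddress.IPv6Network("fd00:6d65:7368::/64")  # example ULA prefix

def fresh_address() -> ipaddress.IPv6Address:
    # Random 64-bit interface identifier within the mesh's /64.
    return MESH_PREFIX[secrets.randbits(64)]

# A node could rotate to a new address every few minutes; only the mesh's
# internal routing layer needs to track which address maps to which node.
print(fresh_address())
print(fresh_address())
```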
Jokes about placing the court in Hollywood notwithstanding, the issue of where such cases are heard is significant.
In traditional lawsuits, the plaintiff gets to pick the location, as long as they can make a reasonable argument that they have a presence there. A low bar - hence all the patent lawsuits in Texas based on having an empty office with the patent troll's name on the door. If the traditional rules apply to these small claims, you better believe the trolls will have "offices" as inconvenient and expensive for defendants to reach as possible.
It's not clear how to fix this. You can bias things by letting the defendant pick, but that doesn't work when the plaintiff is "the little guy" - e.g., DMCA 512(f) complaints, or an individual writer suing a mega-corp that's copied his work.
Traditional small-claims courts almost exclusively deal with suits brought by individuals or small businesses, usually involving local, physical transactions with other individuals or small businesses; if a large corporation is involved, it's probably a defendant and the "plaintiff chooses location" rule works. Copyright lawsuits usually go the other way around, so the analogy breaks down.
On MacOS, certificates are managed through the Keychain Access application, rather than in the browser itself. Open Keychain Access - it's in Applications > Utilities. On the left of the window, you'll see either one pane labeled "Category", or two panes, "Keychains" and "Category". If you only see one pane, select View > Show Keychains. Then in the "Keychains" pane select "System Roots". A list of all root certificates will appear on the right. You can click on a column header like "Name" to sort on that column.
Find the certificate you want to remove - CNNIC ROOT is right there - and double-click on it. Details about the certificate will appear. Click the arrow next to "Trust" to open the trust details. Change "When using this certificate" from "Use System Defaults" to "Never Trust".
It's not possible to delete one of the built-in certificates, at least not using the Keychain Access application. (There is a command line utility that can do it, but even then the removal isn't permanent, and the cert may reappear - though it will be marked "Never Trust".)
The publishers of DRM'ed textual material are about to come to a very painful choice point. Up until recently, scanning a book or journal was an annoying manual procedure, and the results were not very clean. The book scanners used by libraries and by other professionals - with such features as automatic page detection so you could scan a pair of pages together, automatic de-skewing so you didn't have to get the book configured exactly right, automatic digital page smoothing to compensate for the curve of the pages - cost tens of thousands of dollars.
No more. You can get all these features - plus others, like automatic removal of images of stray fingers and conversion from image to searchable text - for a few hundred dollars. The current leader on price - if not necessarily functionality - is available on Amazon for $272 - http://www.amazon.com/piQx-Xcanex-Portable-Document-Scanner/dp/B00DFWCCXS/ref=sr_1_2?ie=UTF8&qid=1396735497&sr=8-2&keywords=book+scanner
Look around a bit and you can find many similar devices at reasonable prices. The only thing that's still quite expensive is automatic page turning - but you can probably live without that. And the prices are only going to drop further, and the software will only get better.
The result of this is that publication in paper form will be like publishing music on a CD: Soon, huge numbers of people will be able to make a DRM-free "eBook" version with very little time, money, or effort expended - and it'll probably look better than most eBook versions. The files involved are small - you could probably fit every book published in English in a year onto a USB stick.
An obvious response will be to try to stop publishing on paper. But that won't work - an eBook reader's screen is, if anything, easier to scan than a paper book. Just push the next-page button. There may be calls to put artificial limits on how fast the page button can be pushed, but given that people skim books to look for material, the reader makers will resist. Maybe readers with LCD screens can be set up to make it hard on the scanners, though I have my doubts. For the e-paper readers - no hope; the screens are just passive displays almost all the time. Try to play games like having the letters move around a bit all the time and the battery will give out quickly.
These publishers are the walking dead. They just don't know it yet.
So ... who's going to take the clip of Rogers's interview, overlay it with big letters saying "LIE!" followed by someone giving all the details of why it's a lie (in the style of political attack ads), and circulate it widely? (I'd do it but I have neither the equipment nor the editing skills to do a good job.)
This is an opportunity that shouldn't be missed.
Nah. The City Council should vote to disallow any spending of City funds to prosecute this lawsuit. Bloomberg could certainly ante up the money, but he probably can't give the money to the city attorneys - he'd have to be the plaintiff, and then he would probably be found not to have standing.
Of course, this assumes the usual arrangement of government legislative and executive functions. New York City's government structures are incredibly baroque - plus there's an overlay of rights and responsibilities that you would expect to belong to the City, but that the State legislature has chosen to take control of for various (typically really bad) reasons. So I have no idea whether it would even be possible for the City Council to control how City money is spent.
All that said: Bloomberg has, in general, been a very good mayor. He has a couple of blind spots, which for some reason have become more and more visible as his tenure approaches its end. Leaders all have their limitations. Rudolph Giuliani was a much more problematic figure with many more odd blind spots, but not only was he also a pretty successful mayor, he was certainly the mayor the city needed after 9/11 - at least *shortly* after 9/11, when everyone looked for leadership and a sense that the world and the City would somehow survive and recover.
Given New York City's size and complexity, an effective mayor who doesn't have some strange sides to him ... probably can't exist.
The real fake reviews are the 5-star reviews. They were planted by the Telcos precisely to start this debate months later, thus distracting attention from the book and the real issue: That we have to continue to have the best Internet in the world, provided by the best companies in the world, because those companies are dedicated to helping the NSA keep us all safe by carefully but completely lawfully preventing that precious Internet - a wonderful American invention - from being used by, you know, terrorists and mother-rapers and father-rapers and all those others on the Group W bench.
You're all a bunch of filthy hippies and Communists and traitors who don't appreciate what this great country of ours - not yours - has done for you. If you don't like it, why don't you just get out and go live in one of those workers' paradises you dream of, like Sweden or Eurasia or something.
-- A True Patriot from Iowa or Idaho or one of those places
If the government can demand your hashed password, they can also demand your *actual* password. While a site doesn't *store* that, it has access to it *every time you log in*. After all, that's exactly what you provide in order to log in!
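A minimal sketch of a conventional login handler (hypothetical helper names, standard library only) showing why: the server *stores* only a salted hash, but the plaintext password passes through its hands on every single login.

```python
# The server only stores (salt, hash) -- but check_login necessarily
# receives the plaintext password, so it could be logged or disclosed.
import hashlib, hmac, os

def store_new_password(db, user, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    db[user] = (salt, digest)

def check_login(db, user, password):
    # `password` arrives here in plaintext (typically over TLS).
    salt, stored = db[user]
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored)

db = {}
store_new_password(db, "alice", "correct horse battery staple")
print(check_login(db, "alice", "correct horse battery staple"))  # True
```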
There are protocols (SRP - http://en.wikipedia.org/wiki/Secure_Remote_Password_protocol - is the most prominent example) in which having full access to the data on the server doesn't permit you to imitate a client (without additional work to brute-force the actual password). Unfortunately, such protocols aren't trivial to retrofit into existing systems, as they require significant computation on the client side, so they haven't seen much traction. Perhaps it's time to consider them.
For an interesting example of how to respond to the new environment, have a look at the Northshire Bookstore (http://www.northshire.com/). Northshire is located in Manchester, Vermont - a fairly small community surrounded by smaller communities, many poor. But Manchester is a tourist hub - there's a ton of skiing in the area, and many resorts that get summer traffic as well. The town is a tourist shopping hub. Many years ago, it was filled with locally owned stores; for the last ten years or so, the major chains have taken over with outlets. The outlets are fading a bit; we'll see what replaces them.
Northshire has been there through all the changes. The store expanded a couple of years ago, and is, in total footprint, comparable to a medium-sized B&N. But since it started life in what was probably once a hotel, and added rooms here and there, it doesn't *feel* huge - it feels like a bunch of rooms. And it specializes in providing an experience: Knowledgeable, friendly staff; regular talks by authors; special displays by local writers; little notes on the shelves from bookstore employees describing their favorites; etc.
Northshire has had a book printing machine (I don't know who makes it) for a couple of years. They also started selling on-line a while back.
I have no connection with the store, other than as a long-time customer: We vacation nearby a couple of times a year, and as a family tradition always include a visit to Northshire on every trip. The store has been busy every time we've been in there. And we always leave with a large collection of new books. In fact, we used to have Borders near us at home. We realized that we almost never went there: Books we needed "quickly" came from Amazon; books that were the result of browsing came from Northshire. Borders just kind of faded from our lives (though we do miss knowing it's there).
I hope Northshire's external appearance of success doesn't conceal inner dry rot. It's been a tough couple of years for many of the Manchester vendors, and they haven't had to deal with the technological changes in the book business.
As far as I can see, not a single comment here so far is from a person in a position to judge whether targeting works. That's because "works" is measured by *the customer who bought the ads*, not by *the person receiving them*.
Advertising is a game of statistics. No matter what medium you use, most ads will be completely ignored by the vast majority of people who see them. That's a reality that businesses have had to accept for years - there's even an old joke to the effect that "Half my advertising money is wasted. If only I knew which half!" In fact, if you consider an ignored ad "wasted money", the fraction "wasted" is over 99%. The only way to judge whether advertising is actually effective is to compare profit - income minus expenses, where expenses include ads - for different amounts of advertising. Not easy to do, but most businesses have concluded that "wasting" money on ads is actually a worthwhile investment.
Because the actual fraction of ads that pay off is so small, it takes only a tiny increase in the absolute number to make a big difference. Suppose you ran an ad that a million people saw, and it brought in 100 customers who wouldn't otherwise have bought your product. (In most situations, that would be an incredibly successful campaign.) Now suppose targeting doubles that to 200 new customers.
Initially, of the million people who saw the ads, 100 thought they were relevant, while 999,900 thought they were not. With targeting, 999,800 thought they were not. So if you ask those who received the ads whether they got relevant ads, you'll reach the conclusion that an absolutely overwhelming percentage did not. Obviously, targeting *doesn't work*.
And yet, doubling the number of customers the ad brings in is an almost impossibly high improvement. Advertisers would kill to get it. More to the point - they'd pay a great deal of money to whoever could deliver numbers like that. Obviously, targeting *does* work!
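The arithmetic behind that apparent paradox, spelled out:

```python
# Both conclusions computed from the same numbers.
impressions = 1_000_000
baseline = 100   # customers brought in without targeting
targeted = 200   # customers brought in with targeting

print(f"conversion without targeting:   {baseline / impressions:.3%}")      # 0.010%
print(f"conversion with targeting:      {targeted / impressions:.3%}")      # 0.020%
print(f"still found the ad irrelevant:  {1 - targeted / impressions:.3%}")  # 99.980%
print(f"improvement for the advertiser: {targeted / baseline:.0%}")         # 200%
```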
Forbes picked up on this with two articles. One pretty much summarized what the Times said; the other looked into the question of what limits there were on Target should they decide to sell this data. Their conclusion: At present, pretty much none. That data is Target's to do with as they like. Were they to have a privacy policy, they would be bound by it - but they've never published one.
One person they spoke to is certain that this will soon change - that within 5 years, there might be limits on what data Target can collect, and there will certainly be limits on what they can sell. I'm pretty sure that's true.
By the way, it's important to keep in mind that Target - and every other company that does targeted advertising - always emphasizes the positive value to the customer of delivering ads that describe products they might actually need. But that's of course not why the companies are doing this. The point is to sell more products. In fact, it's pointed out in these articles that the reason Target is so interested in discovering pregnant women and marketing to them is that it's long been known that young families tend to "bond" to certain brands (and likely places to buy them). Get them early and they'll keep coming back. Those early special discounts will be repaid many times over by full-price purchases.
Is there something wrong with this? Probably not, but just as there's a line where "clever" becomes "creepy", there's a line where "attractive" becomes "manipulative". It's never clear where the line is until after you've crossed it.
Many years ago, the group I worked for at a large company moved to a brand new facility. There was a committee that helped in designing various amenities, like the cafeteria. One decision was the color. A consultant on the matter recommended (I think) yellow, because studies had shown that people bought more food in a yellow cafeteria. OK … but who is going with yellow good for? It's certainly good for the company running the cafeteria; but for everyone else working there, probably not. Whenever you see clever ideas like data mining to find pregnant women … ask yourself: Cui bono? Who benefits?
"We spent millions to put in that new safety system, and it's been totally wasted - we haven't had an accident in two years!"
It's very nice to say that "the free market" eliminated Microsoft IE's near-monopoly on browsers - but it's a misreading of history. No real "free market" existed, either before or after the change.
*Before* the change, Microsoft made a number of monopolistic moves. Windows represented virtually the entire PC market, and was impossible to attack: The deal offered to hardware vendors by Microsoft was "If you want a reasonable price for Windows, you have to agree to pay *per unit shipped*, whether a unit has Windows on it or not." Windows itself had IE6 embedded. You could, if you were technically adept, install another browser - but IE6 had to stay, because various other pieces of Windows (deliberately) relied on it.
*After* the change, Microsoft was under government scrutiny and regulation. They were forced to modify Windows to yank out dependencies on IE. More important, they were forced to offer users a choice of browsers during installation. There was no user demand for any of this, because the majority of users never even knew there was an issue.
If you look at things strictly from the standpoint of government regulation, the market was "free" before the anti-trust moves, and what Microsoft did was simply sharp-elbowed competition. That's a principled position, if one not shared by most people. You can argue, purely on market-theory grounds, that Microsoft's moves were, in the long run, going to leave an opening for competitors to take the market from them. What you *can't* argue is that IE6's decline *proves* that pure market competition would have been sufficient - because it wasn't pure market competition that did the trick.
A meaningless comparison. Key length is one of those obvious things - after all, it's just a number and bigger is clearly better, right? - that leads people astray all the time. The thing to keep in mind is that what matters is not the *number of bits in the key*, it's the number of possible distinct keys. If I told you "I use AES-256 for absolute security, but it's easy for me to remember the key: I only choose keys between 1 and 1000" - well, that's obviously not very secure: You can guess my key in at most 1000 tries!
For a system like AES, every possible 128 (or 192 or 256) bit combination is a valid key. The strength of the system (against a brute force attack!) can be read directly off the number of bits. No conceivable computer will ever be able to attack a 256-bit key, and personally I cannot imagine a situation where a 128-bit key could be brute-forced.
For a system like RSA, only very special combinations of bits correspond to valid keys. An AES key is just a bunch of bits, while an RSA key, as a number, has to be the product of exactly two prime numbers in a particular range, with special properties to boot. Even then, there would be too many values to try in a pure brute-force fashion - but because of the necessary mathematical properties of an RSA key, no one does that. Instead, they use more efficient techniques that rely on those mathematical properties. A 1024-bit RSA key requires about as much computational effort to break as an 80-bit AES-like key. That's why the current recommendation is for at least 2048 bits (roughly the equivalent of 112 AES-like bits), though even that's considered pushing it a bit. To get to the equivalent of a 128-bit AES key, you need a 3072-bit RSA key; to match AES-256, you need a 15360-bit RSA key! Such keys actually get used today.
In 2005, if you combine published estimates, experts were predicting that 1024-bit RSA should be phased out by 2010 (though high-value uses should move faster). OK, so halfway through that period, *one* 1024-bit RSA key was broken ... though in fact even that isn't true. (Breaking an RSA key amounts to factoring a large number into its two constituent primes. What the link points to was a successful factorization of a very specially chosen number - 2^1039-1 - for which even better mathematical techniques are known. Even so, it took the equivalent of 100 years of computer time. An indication that it was time to move on from 1024-bit keys? Absolutely. A practical "break" for massive numbers of RSA keys? Not quite.)
An alternative to RSA is elliptic curve crypto (ECC), which has the same public-key properties but can use many more possible combinations of bits in a key, so can get by with dramatically shorter keys. In fact, to get the ECC equivalent of n-bit AES, you need 2n-bit ECC.
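The equivalences cited above, collected in one place (these are the commonly used NIST SP 800-57 comparable-strength figures):

```python
# Approximate comparable security strengths: symmetric bits vs. RSA
# modulus size vs. ECC key size (NIST SP 800-57 Part 1).
equivalences = [
    # (symmetric, RSA,  ECC)
    (80,         1024,  160),
    (112,        2048,  224),
    (128,        3072,  256),
    (192,        7680,  384),
    (256,       15360,  512),
]
print(f"{'symmetric':>9} {'RSA':>6} {'ECC':>4}")
for sym, rsa, ecc in equivalences:
    print(f"{sym:>9} {rsa:>6} {ecc:>4}")
```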
See "Second Circuit Strikes Down “Hot News” Injunction Against Flyonthewall — but Will the Misappropriation Tort Become More Invasive?" http://pubcit.typepad.com/clpblog/2011/06/second-circuit-strikes-down-hot-news-injunction-against-fl yonthewall-but-will-the-misappropriation-t.html for a note of caution about this ruling. Briefly, Paul Alan Levy is concerned that the ruling ignored First Amendment and other broad issues and narrowly focused on the detailed fact pattern. It explicitly did *not* strike down the Hot News doctrine. As a result, it may have little precedential value - and leaves Hot News like "fair use", something that no court has fully pinned down so that you never know for sure where the boundaries like.
It's certainly true that courts consider it a virtue to rule as narrowly as possible, and only answer questions actually asked by a particular case. But we also need broader principles to emerge so that people can have reasonable certainty of how new, but not completely novel, cases will be treated. If Levy is correct, we're going to have to see more Hot News cases decided before we really know where we stand.
Each WPA association in shared-secret (PSK) mode uses a unique session key, which is computed using the shared secret (the password you enter), the MAC addresses of the two ends of the association, and two random numbers, one generated at each end. So if you enable WPA with password "password" - or anything else - each individual device will actually use a different encryption key in conversations with the access point.
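For the curious, here's a simplified sketch of that derivation as specified for WPA2-PSK (IEEE 802.11i); real implementations live in drivers and firmware, and details like key length vary by cipher suite, but the structure is this:

```python
# WPA2-PSK key derivation, simplified. The PMK depends only on the
# passphrase and SSID; the per-session PTK also mixes in both MAC
# addresses and both handshake nonces, so every association differs.
import hashlib, hmac

def pmk(passphrase: bytes, ssid: bytes) -> bytes:
    # Pairwise Master Key: the same for every device on the network.
    return hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, 32)

def ptk(pmk_bytes: bytes, ap_mac: bytes, sta_mac: bytes,
        anonce: bytes, snonce: bytes, length: int = 48) -> bytes:
    # Pairwise Transient Key via the 802.11i PRF (HMAC-SHA1 with a counter).
    data = (min(ap_mac, sta_mac) + max(ap_mac, sta_mac) +
            min(anonce, snonce) + max(anonce, snonce))
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(pmk_bytes, b"Pairwise key expansion\x00" + data +
                        bytes([counter]), hashlib.sha1).digest()
        counter += 1
    return out[:length]

k = pmk(b"password", b"CoffeeShopWiFi")        # weak passphrase; same PMK for all
session = ptk(k, b"\x00\x11\x22\x33\x44\x55",  # AP MAC (made up)
              b"\x66\x77\x88\x99\xaa\xbb",     # client MAC (made up)
              b"A" * 32, b"S" * 32)            # ANonce, SNonce (made up)
print(session.hex())
```

Because both MAC addresses and both nonces feed the PTK, two devices sharing the passphrase "password" still end up with different session keys.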
*If* someone is monitoring at the time the association is set up, they will get all the data needed to compute the actual session key. But if they weren't able to see the establishment of the association, they can't derive the key. So, unlike the case with non-encrypted connections, just being able to converse with the access point doesn't mean you can read all its traffic. In fact, you can only read your own.
Now, this is not a very robust kind of protection. One attack against an existing connection is to interfere with it in any of a variety of ways, forcing it to be re-established - at a time when you are presumably monitoring. Still, it's better than nothing - and it's sufficient to render connections opaque to Firesheep.
A convention some people are following is to give the network a name that tells you what password to use. Of course, you can (depending on the exact circumstances) simply tell people what the password is - or put up a sign with that information.
For all the effort and money, the Times's paywall doesn't even work right. My wife ran into it last weekend. She'd apparently reached her limit of 20 stories. We're actually long-time paper subscribers, so she "just had to log in". Except it wouldn't work. She actually called the Times support line. They had us clear cookies, log out, and log back in again. That worked for one or two stories - at which point it kicked in again.
Fortunately, for Safari users, there's a *built-in workaround*: Just click the "Reader" button.
There are so many levels on which the Times just doesn't get it. They pissed off a very long-time subscriber, wasted support costs (non-trivial, if you look at general industry costs) on *two* calls, the second of which was just a complaint that it didn't work - and ended up with another person who now knows how to get around their code when she needs to.
The Federal False Claims Act - allowing anyone to file a suit "on behalf of the Federal Government" against a Federal contractor for frauds against the government, and then share in any award - has been on the books since 1863 and has been used and upheld to this day. It was actually broadened in 1986 and 2009. So there's by no means an absolute bar to Congress's ability to allow private citizens to act when the government itself has failed to.
How that will play out in this case - exactly where a court will draw the lines on what Congress can and cannot delegate - I have no clue. But it's certainly not obvious that Congress can't allow anyone to help enforce the Patent Marking restrictions.
Re: Re: Scratching the surface
You might find the following article of interest:
http://www.nytimes.com/2016/08/09/technology/rural-electrical-cooperatives-turn-to-the-internet.html
-- Jerry