Distributed Search Engines, And Why We Need Them In The Post-Snowden World

from the easier-said-than-done dept

One of the many important lessons from Edward Snowden's leaks is that centralized services are particularly vulnerable to surveillance, because they offer a single point of weakness. The solution is obvious, in theory at least: move to decentralized systems where subversion of one node poses little or no threat to the others. Of course, putting this into practice is not so straightforward. That's especially true for search engines: creating distributed systems that are nonetheless capable of scaling so that they can index most of the Web is hard. Despite that challenge, distributed search engines do already exist, albeit in a fairly rudimentary state. Perhaps the best-known is YaCy:

YaCy is a free search engine that anyone can use to build a search portal for their intranet or to help search the public internet. When contributing to the world-wide peer network, the scale of YaCy is limited only by the number of users in the world and can index billions of web pages. It is fully decentralized, all users of the search engine network are equal, the network does not store user search requests and it is not possible for anyone to censor the content of the shared index. We want to achieve freedom of information through a free, distributed web search which is powered by the world's users.

...

The resulting decentralized web search currently has about 1.4 billion documents in its index (and growing -- download and install YaCy to help out!) and more than 600 peer operators contribute each month. About 130,000 search queries are performed with this network each day.
Another is Faroo, which has an interesting FAQ that includes this section explaining why even privacy-conscious non-distributed search engines are problematic:
Some search engines promise privacy, and while they look like real search engines, they are just proxies. Their results don't come from their own index, but from the big incumbents (Google, Bing, Yahoo) instead (the query is forwarded to the incumbent, and the results from the incumbent are relayed back to the user).

Not collecting logfiles (of your IP address and query) and using HTTPS encryption at the proxy search engine doesn't help if the search is forwarded to the incumbent. As revealed by Edward Snowden, the NSA has access to the US-based incumbents via PRISM. If the search is routed over a proxy (aka "search engine"), the IP address logged at the incumbent is that of the proxy, not that of the user. So the incumbent doesn't have the user's IP address, the search engine proxy promises not to log or reveal the user's IP, and HTTPS prevents eavesdropping on the way from the user to the search engine proxy.

Sounds good? By observing the traffic between user and search engine proxy (IP, time and size are not protected by HTTPS) via PRISM, Tempora (GCHQ taps the world's communications) et al., and combining that with the traffic between the search engine proxy and the incumbent (query, time and size are accessible via PRISM), all that seemingly private and protected information can be revealed. This is a common method known as traffic analysis.

The NSA system XKeyscore allows search engine keywords and other communication content to be recovered just by observing connection data (metadata) and combining it with backend data sourced from the incumbents. The system is also used by the German intelligence services BND and BfV. Neither encryption with HTTPS, nor the use of proxies, nor restricting the observation to metadata protects your search queries or other communication content.
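To make the traffic-analysis point concrete, here is a minimal sketch in Python. The flow records and the matching rule are hypothetical and deliberately simplified; real correlation is statistical and noisier, but the principle is the same: an observer who sees only connection metadata on both sides of a "private" proxy can link a user's IP address to the query that was forwarded to the incumbent.

```python
# Toy traffic analysis: no HTTPS payload is read; only endpoints, timing and
# size of each connection are used to link users to forwarded queries.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src: str          # source IP address
    dst: str          # destination IP address
    timestamp: float  # seconds since epoch
    size: int         # bytes on the wire

def correlate(user_to_proxy, proxy_to_incumbent, max_delay=2.0, size_slack=200):
    """Pair each user->proxy flow with the proxy->incumbent flow that follows
    it most closely in time and has a similar size."""
    matches = []
    for u in user_to_proxy:
        candidates = [
            f for f in proxy_to_incumbent
            if 0.0 <= f.timestamp - u.timestamp <= max_delay
            and abs(f.size - u.size) <= size_slack
        ]
        if candidates:
            best = min(candidates, key=lambda f: f.timestamp - u.timestamp)
            matches.append((u.src, best))  # user IP linked to a query the incumbent saw
    return matches
```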
Unfortunately, unlike YaCy, Faroo is not open source, which means that its code can't be audited -- an essential pre-requisite in the post-Snowden world. Another distributed search engine that is fully open source is Scholar Ninja, a new project from Jure Triglav:
I’ve started building a distributed search engine for scholarly literature, which is completely contained within a browser extension: install it from the Chrome Web Store. It uses WebRTC and magic, and is currently, like, right now, used by 42 people. It’s you who can be number 43. This project is 20 days old and early alpha software; it may not work at all.
As that indicates, Scholar Ninja is domain-specific at the moment, although presumably once the technology is more mature it could be adapted for other uses. It's also very new -- barely a month old at the time of writing -- and very small-scale, which shows that distributed search has a long way to go before it becomes mainstream. Given the serious vulnerabilities of traditional search engines, that's a pity. Let's hope more people wake up to the need for a completely new approach, and start to help create it.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Filed Under: distributed computing, privacy, search engines


Reader Comments


  • Anonymous Coward, 8 Jul 2014 @ 2:18am

    I agree. And I think such search engines will start being used soon: even if their search results aren't as relevant as Google's, they offer other advantages, such as not being censored and being more privacy-friendly.

    In the end, if billions of people index all pages, it could get better than Google, too. The power of the crowd vs a single entity.


    • Ninja (profile), 8 Jul 2014 @ 3:19am

      Re:

      Google could build into their own system the power to provide results from the YaCy network, for instance (while helping them). When a takedown notice comes they can say "sorry, we can't take it down, it's beyond our power." Maybe if the single entity joins the crowd it can empower that crowd even more.


      • Anonymous Coward, 8 Jul 2014 @ 4:22am

        Re: Re:

        Nice idea, but Google is already using a database of links to be blocked, and would therefore be able to, and be expected to, filter results that they obtain from elsewhere and pass on to users. This is the big problem with censorship mechanisms: once implemented at a choke point, they can filter everything passing through that choke point.


  • Ninja (profile), 8 Jul 2014 @ 3:17am

    YaCy was pretty crappy a while back, but maybe with the scale it has now it's usable. In any case, since I've upgraded my connection I've been assisting them, and I truly hope they become mainstream.

    Distributed solutions are the future.


  • Anonymous Coward, 8 Jul 2014 @ 4:15am

    Unfortunately there seem to be several problems that will significantly limit the utility of peer to peer search engines. Ranking algorithms require access to large parts of the index, such as a count of all unique links to a page. Also users want to search the whole index for a given term. If this requires them to find and connect to thousands of nodes there is the recursive problem of finding the nodes, and the problem of managing thousands of connections, or keeping track of pending responses to UDP requests.
    Note that almost all file-sharing systems rely on a centralized index to allow searching and finding peers. In essence, finding torrents is a smaller-scale search problem: although the actual file transfer is done on a decentralized basis, finding the file is usually centralized. File sharers are more aware than most people of the hazards of centralized systems, and include many programmers in their ranks, yet they are still struggling to come up with a way of decentralizing search to avoid the problems of trackers being blocked and domains being seized. And that is a significantly easier problem to solve than a full index of the publicly available Internet, as the indexes involved are much smaller.
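    To put the connection-management point in rough code (a hypothetical peer API, not taken from any real implementation): answering a single query already means working out which peers hold the relevant index fragments and then tracking a pile of outstanding requests and timeouts.

```python
# Sketch of the fan-out a peer-to-peer query needs; query_peer is a stand-in
# for a real UDP/HTTP request to a remote node.
import asyncio
import hashlib

def responsible_peers(term: str, peers: list[str], replicas: int = 3) -> list[str]:
    """Pick the peers whose hashed IDs are 'closest' to the hash of the term."""
    term_id = int(hashlib.sha1(term.encode()).hexdigest(), 16)
    def distance(peer: str) -> int:
        return term_id ^ int(hashlib.sha1(peer.encode()).hexdigest(), 16)
    return sorted(peers, key=distance)[:replicas]

async def query_peer(peer: str, term: str) -> list[str]:
    """Placeholder for the network round trip; returns the URLs that peer holds."""
    await asyncio.sleep(0.01)  # stand-in for real latency
    return []

async def distributed_search(term: str, peers: list[str]) -> set[str]:
    targets = responsible_peers(term, peers)
    # Every outstanding request must be tracked and allowed to fail independently;
    # this bookkeeping is exactly the overhead described above.
    replies = await asyncio.gather(*(query_peer(p, term) for p in targets),
                                   return_exceptions=True)
    return {url for r in replies if isinstance(r, list) for url in r}
```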


    • Whatever (profile), 8 Jul 2014 @ 4:46am

      Re:

      Agreed. The biggest poison for any search engine is "SEO" people knowing exactly what to do to rank well. Once they know that, they will repeat it as many times as needed to totally dominate results and render the searches effectively worthless.

      A system where the ranking process is open source is pretty much doomed to an early death, as the results will be almost entirely spam within hours of it reaching a reasonable level of user searches.


      • Anonymous Coward, 8 Jul 2014 @ 6:09am

        Re: Re:

        You have a comprehension fail, as the point I was making was that it is very difficult to do search optimization with a distributed index. This makes it difficult to rank results, and one consequence of this would be that SEO does not work, except possibly if all the sites influencing one node are used, and even then it only affects users of that node.


      • Gwiz (profile), 8 Jul 2014 @ 6:49am

        Re: Re:

        The biggest poison for any search engine is "SEO" people knowing exactly what to do to rank well. Once they know that, they will repeat it as many times as needed to totally dominate results and render the searches effectively worthless.

        With YaCy the user controls the ranking, since it's done at the client. The user also controls their own blacklist of results. I've been running a YaCy node for over a year and really have had to blacklist only two entities - one was a porn link spammer and the other was an annoying link farm without any actual content.


        A system where the ranking process is open source is pretty much doomed to an early death, as the results will be almost entirely spam within hours of it reaching a reasonable level of user searches.

        Not at all. YaCy doesn't seem to be overrun by spam at all. I'm migrating to using YaCy almost exclusively now: since I have it set up to crawl based on what I search for, my results are very relevant to me.
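        Conceptually (a toy illustration of the idea, not YaCy's actual code), client-side control just means the filtering and ordering happen on your own machine, against your own blacklist:

```python
# Toy client-side ranking: results arrive from peers, the client drops anything
# on the user's own blacklist and orders what's left locally.
BLACKLIST = {"spammy-link-farm.example", "porn-link-spammer.example"}  # hypothetical entries

def rank_locally(results, query_terms):
    """results: iterable of (url, page_text) pairs gathered from peers."""
    kept = []
    for url, text in results:
        host = url.split("/")[2] if "://" in url else url
        if host in BLACKLIST:
            continue  # the user, not a remote server, decides what gets dropped
        score = sum(text.lower().count(term.lower()) for term in query_terms)
        kept.append((score, url))
    return [url for score, url in sorted(kept, reverse=True)]
```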


        • Anonymous Coward, 8 Jul 2014 @ 11:50am

          Re: Re: Re:

          Don't feed the troll. If this article were about puppies and rainbows, the troll would find a way to shit on it.


    • Anonymous Coward, 8 Jul 2014 @ 5:25am

      Re:

      Ranking algorithms require access to large parts of the index, such as a count of all unique links to a page. Also users want to search the whole index for a given term. If this requires them to find and connect to thousands of nodes there is the recursive problem of finding the nodes, and the problem of managing thousands of connections, or keeping track of pending responses to UDP requests.


      You know, Google has the same problems. Did you really think Google's search engine is centralized? It's not: it's distributed across thousands of nodes, each one holding only part of the index. So things like computing the ranking and distributing the queries are already solved problems.

      What Google has that is centralized is trust. Google's nodes know they can trust other nodes, which simplifies things. File sharing systems usually do not have that trust. This leads to the most visible problem with decentralized search: nodes returning faked (usually spam) results.
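      A rough scatter/gather sketch (toy shards held as plain dictionaries) shows why the mechanics are the easy part; the missing ingredient in a peer-to-peer setting is knowing whether each shard's answer is honest:

```python
# Toy scatter/gather over a sharded index: each shard returns (url, score)
# postings for a term and the client merges them. In one operator's data
# center the shards are trusted; in a peer-to-peer network any "shard" could
# just as easily return fabricated, spammy postings.
from heapq import nlargest

def search_shard(shard: dict[str, list[tuple[str, float]]], term: str) -> list[tuple[str, float]]:
    """One shard's partial answer: the postings it holds for the term."""
    return shard.get(term, [])

def federated_search(shards, term, k=10):
    partials = []
    for shard in shards:
        partials.extend(search_shard(shard, term))
    # Merging is easy; verifying that each partial answer is honest is not.
    return nlargest(k, partials, key=lambda posting: posting[1])
```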


      • Anonymous Coward, 8 Jul 2014 @ 6:00am

        Re: Re:

        There is a huge difference between a server farm and the Internet when it comes to connecting thousands of nodes and thousands of disks: a server farm has a huge number of switches allowing massive parallelism in connections and in access to storage. Any node can, and usually will, have several connections to several networks, and algorithms can be optimized to maximize network locality. Also, within such a farm, latency is much, much lower than over the Internet.
        The other huge difference between a supercomputer or server farm and the Internet is the communications bandwidth available to the system, which is higher by several orders of magnitude, aided by specialized networking support at each node, such as the ability to bypass the kernel when accessing the network and to use a local addressing scheme to link nodes within the system. When comparing performance, all the nodes in one big barn amount to a centralized system compared to having the nodes spread all around the world.


    • Ninja (profile), 8 Jul 2014 @ 6:51am

      Re:

      Tbh there is a tool to search the torrent files themselves (or the hashes, whatever) using DHT. I've yet to use it so I can't attest to its efficiency.

      I'd say that it is feasible, or will be in a matter of a few years or even months. It may take a few more seconds instead of the nanoseconds a standard search engine takes, but that's a price I wouldn't mind paying. As for fake/spammy nodes, there are tools to handle them already. On BitTorrent, for instance, bad nodes get isolated and eventually ignored in the swarm (they were forced into such measures due to the MAFIAA poisoning swarms), so the technology is there. There will be tradeoffs for sure, but it can be achieved.
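      Something like a simple per-peer reputation score would go a long way; the numbers below are made up, just to show the idea:

```python
# Toy peer reputation: peers whose results repeatedly fail verification get
# down-scored and eventually ignored, much like poisoned peers in a swarm.
class PeerReputation:
    def __init__(self, ban_threshold: float = 0.2):
        self.scores: dict[str, float] = {}   # peer id -> score in [0, 1]
        self.ban_threshold = ban_threshold

    def record(self, peer: str, result_verified: bool) -> None:
        score = self.scores.get(peer, 0.5)   # new peers start neutral
        # Exponential moving average: reward verified results, punish fakes.
        self.scores[peer] = 0.9 * score + 0.1 * (1.0 if result_verified else 0.0)

    def usable_peers(self, peers: list[str]) -> list[str]:
        return [p for p in peers if self.scores.get(p, 0.5) >= self.ban_threshold]
```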


  • Anonymous Coward, 8 Jul 2014 @ 5:48am

    Unfortunately there seem to be several problems that will significantly limit the utility of peer to peer search engines. Ranking algorithms require access to large parts of the index,


    Isn't that what BigCouch was supposed to solve?

    http://bigcouch.cloudant.com/

    Also, Google's Omega cluster does the same thing: it doesn't matter where in the world a server is, they all act like one big machine.

    https://research.google.com/pubs/pub41684.html


    • Anonymous Coward III, 8 Jul 2014 @ 6:32am

      Re:

      To me, both Anonymous Coward I & II have missed the basic problem.

      Any system that can be manipulated will be manipulated until results are completely useless to users.

      Google has the best search engine; Yahoo has the second best; all others are worse than Google and Yahoo.

      Google search results are manipulated by filters.

      Some of these filters remove what governments and pressure groups consider to be inappropriate material; others manipulate what is deemed appropriate by commercial interests; all filters, except those initiated by the end users, manipulate what the end user is allowed to see in an endless process of censorship.


      The problem is that Google does not produce the end user's desired results, while recording the end user's every action, which can then be exploited for the benefit of others.

      Google and search engines need to be replaced by something that produces the results the end user wants without the constant surveillance.


  • Kenneth Michaels, 8 Jul 2014 @ 5:51am

    Paying the peers

    I've seen the idea of Torcoin, a bitcoin-like protocol to reward those who provide bandwidth to a Tor network. Perhaps we need YaCyCoin to reward those who provide index and bandwidth to the distributed search engine.

    Of course, I have no idea on how to do that.


  • private frazer, 8 Jul 2014 @ 5:58am

    we contribute 5% of hardware for distributed programs

    Search, social media, email: make it all distributed and kill Google and Facebook etc. -- anyone with a pipe to NSA/GCHQ.


  • Gwiz (profile), 8 Jul 2014 @ 7:14am

    YaCy Tips

    Some tips and tricks I've learned to make YaCy run better:

    - Increase the RAM setting. The default is 600MB. I have 4GB, so I give YaCy 1.2GB (1200MB). I would give more if this were a dedicated node, but since it's my laptop, 1.2GB seems to play nice with the other stuff I'm running.

    - Limit the crawl maximum. The default is 6000 PPM (pages per minute), which is pretty large. I share my internet connection with other people and devices, so I limit it to 300 PPM and don't hog all the bandwidth and piss anyone off.

    - Increase language ranking to 15 (max). I tend to like reading stuff in English, but that's just me.

    - Turn on the Heuristics settings so it automatically crawls one level deep on every page returned in the results. This way, if you do a search and the results kind of suck, wait ten minutes, do the search again, and the results are better because it was "learning" about what you just searched for.

    I also turn on the "site operator shallow crawl". When you enter a search query in the format "site:somewebsite.com" it automatically crawls that site one level deep.


  • Vidiot (profile), 8 Jul 2014 @ 8:36am

    I used to use distributed search engines. One was called AltaVista, one was called AskJeeves, and there were these other up-and-comers called Google and Yahoo. Seldom saw the same results; and, devoid of AI algorithms, you could search for literal phrases, booleans and directory paths. Ahhh, the good ol' days...


  • Anonymous Coward, 8 Jul 2014 @ 8:55am

    Yet more evidence that this website is nothing but a front for google.


  • toyotabedzrock (profile), 8 Jul 2014 @ 12:39pm

    Where is the index for distributed search stored?


    • Gwiz (profile), 8 Jul 2014 @ 1:21pm

      Re:

      Where is the index for distributed search stored?


      For YaCy it's a DHT (distributed hash table) and it's stored and shared in little bits and pieces from each user's hard drive.

      Basically, it's "stored" the same way a torrent is "stored" in the swarm.
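      If it helps, here's a toy version of the idea in Python (not YaCy's actual code): hash each word to pick the peer that stores its list of URLs, so no single machine ever holds the whole index.

```python
# Toy distributed word index: each word's URL list lives on the peer whose
# hashed ID is closest (by XOR distance) to the hash of the word.
import hashlib

def _node_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class DistributedWordIndex:
    def __init__(self, peers: list[str]):
        self.peers = peers
        # One dict per peer stands in for that peer's local storage.
        self.storage: dict[str, dict[str, set[str]]] = {p: {} for p in peers}

    def _peer_for(self, word: str) -> str:
        word_id = _node_id(word)
        return min(self.peers, key=lambda p: word_id ^ _node_id(p))

    def add(self, word: str, url: str) -> None:
        self.storage[self._peer_for(word)].setdefault(word, set()).add(url)

    def lookup(self, word: str) -> set[str]:
        return self.storage[self._peer_for(word)].get(word, set())

# Example: DistributedWordIndex(["peer-a", "peer-b", "peer-c"]).add("snowden", "https://example.org/")
```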


      • Anonymous Coward, 8 Jul 2014 @ 3:38pm

        Re: Re:

        I think that this approach is naive, in that the Internet is far larger than most people can conceive. All the works published by the labels, studios and book publishers are but a pebble on the beach when compared to all the web pages that exist on the Internet, and that size is multiplied several times when public email archives are added to the indexes, and several more times if you wish to include individual tweets.
        Let's look at a grain of sand on the beach of the Internet: a Google search for Barack Obama gives:
        About 58,900,000 results (0.30 seconds)
        That is almost 2GB of data just for the links, assuming 30 characters per link. If you want a descriptive paragraph, a la Google, that would be more like 40-50GB of data. Throw in the rest of the indexes needed to support more refined searches, and that is looking at several hundred GB just to do a decent index for one man. When distributed to user-level machines, that part of the index would be spread over several hundred machines. Start scaling up to the whole Internet, and tens of millions of machines are likely required, which makes finding which machines to query a major search problem in its own right.
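        Spelling out that arithmetic (the byte counts are my own assumptions, not measurements):

```python
# Back-of-envelope index size for a single popular query.
results = 58_900_000          # reported hit count
bytes_per_link = 30           # assumed average URL length
bytes_per_snippet = 800       # assumed size of a descriptive paragraph

links_only = results * bytes_per_link          # ~1.8 GB
with_snippets = results * bytes_per_snippet    # ~47 GB

print(f"links only:    {links_only / 1e9:.1f} GB")
print(f"with snippets: {with_snippets / 1e9:.1f} GB")
```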


        • Gwiz (profile), 9 Jul 2014 @ 6:52am

          Re: Re: Re:

          YaCy's DHT index only stores what they term a Reverse Word Index (RWI). The entries only associate a word with the URLs that contain that word.

          When you search, the client receives a list of URLs that contain your search word from your own index and your peers. It then verifies that the word is on each of the resulting pages and creates the snippets at that point. The snippets aren't saved anywhere in the index. Yes, this approach adds some time when waiting for results, but it assures that the resulting pages exist and removes bad links from the index.

          YaCy seems to be scaling up just fine, with over 350 thousand words and almost 2 billion URLs currently.
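          A rough sketch of that verification step (with a hypothetical fetch helper, not actual YaCy code): the index only yields candidate URLs, and the client re-fetches each page to confirm the word really is there before building a snippet.

```python
# Toy snippet verification: candidate URLs come from the word index; only
# pages that can be fetched and still contain the word produce snippets.
import urllib.request

def fetch(url: str, timeout: float = 5.0) -> str:
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

def verified_snippets(candidate_urls: list[str], word: str, width: int = 80) -> dict[str, str]:
    snippets = {}
    for url in candidate_urls:
        try:
            text = fetch(url)
        except OSError:
            continue                      # dead link: quietly dropped from results
        pos = text.lower().find(word.lower())
        if pos != -1:                     # keep only pages that still contain the word
            snippets[url] = text[max(0, pos - width // 2): pos + width // 2]
    return snippets
```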


