Internet Logic

Soon, this effort was overwhelmed. Out of the military logic that inspired its inception, the Internet pushed (almost to the point of delirium) the idea of operating without a fixed center and without borders, so as to stay "alive" even if one of its components should fail. Perhaps like no other technology in human history, the Internet grew at a ferocious speed and with bewildering chaos. Very soon, the space turned into an ocean. How to get one's bearings? How to find one's way through the virtual maze? Then arose the web search engines. Huge databases that tell the browser where to find the information of interest, the search engines became "the doors." Without them, since they took on the task of indexing other sites, virtual space might have been impassable.

The search engines rely on a specific type of program, called a web crawler (also, web spider or web robot). A web crawler roams virtual space in permanent insomnia. (At this very moment, while someone reads these lines, thousands of crawlers are active.) Unlike a traditional database, to which people contribute specific information relevant to a particular case, the expansion of the web search engines is blind, automatic.
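The core idea can be sketched in a few lines. What follows is a minimal, illustrative crawler, not the code of any real search engine: starting from a handful of seed addresses, it fetches each page, keeps its text, and queues up every link it finds so that the walk, like the insomniac crawler described above, never stops on its own. It assumes the widely used third-party libraries requests and BeautifulSoup; the function names and limits are hypothetical.

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def crawl(seed_urls, max_pages=100):
    """Breadth-first walk of the web starting from a few seed URLs."""
    queue = deque(seed_urls)
    visited = set()
    pages = {}  # url -> extracted text

    while queue and len(pages) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            response = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # unreachable sites are simply skipped
        soup = BeautifulSoup(response.text, "html.parser")
        pages[url] = soup.get_text(" ", strip=True)
        # Every link found becomes a new destination for the crawler.
        for anchor in soup.find_all("a", href=True):
            queue.append(urljoin(url, anchor["href"]))
    return pages
```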

The reason is clear: building a comprehensive index of the web exceeded, exceeds, and will probably continue to exceed the capacity of a single person, or even of a crowd. The crawlers travel, encounter a website, stop, sniff it, explore it, extract all the available information, and then return to their barracks. The challenge for the search engines is then to index all that information as well as possible and have it ready for when a user asks about this or that subject.
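That second step, indexing, can also be sketched in toy form. Assuming the hypothetical pages dictionary returned by the crawler above, the sketch below builds an inverted index mapping each word to the pages that contain it, so a user's query can be answered without re-reading the web. Real engines rank and weight results in far more elaborate ways; this only illustrates the basic lookup.

```python
from collections import defaultdict


def build_index(pages):
    """Map every word to the set of pages in which it appears."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index


def search(index, query):
    """Return the pages that contain every word of the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return results
```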