Understanding web crawlers

Posted: May 14th, 2018

 

A web crawler is a program or automated script that browses the World Wide Web in a methodical, automated manner. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches.

 

Search engines send out 'spiders', also known as crawlers or robots, to visit your site and gather web pages. When a robot visits a website, it does one of two things:

 

    > It looks for the robots.txt file and the robots meta tag to see the "rules" that have been set forth.

 

or

 

    > It begins to index the web pages on your site.

 

 

Web search engines and some other sites use web crawling or spidering software to update their own web content or their indices of other sites' web content.

 

Crawlers consume resources on visited systems and often visit sites without approval. Issues of schedule, load, and "politeness" come into play when large collections of pages are accessed. Mechanisms exist for public sites not wishing to be crawled to make this known to the crawling agent. For instance, including a robots.txt file can request bots to index only parts of a website, or nothing at all.
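As a rough sketch, here is how a polite crawler might check such robots.txt rules before fetching anything, using Python's standard-library robots.txt parser. The site, the rules and the URLs below are made up for the example.

    from urllib.robotparser import RobotFileParser

    # Hypothetical rules: every bot may crawl the site except the /private/
    # section. (Individual pages can also opt out of indexing with a
    # <meta name="robots" content="noindex"> tag.)
    example_rules = [
        "User-agent: *",
        "Disallow: /private/",
    ]

    parser = RobotFileParser()
    parser.parse(example_rules)

    # A well-behaved crawler asks before every fetch:
    print(parser.can_fetch("*", "https://example.com/public/page.html"))   # True
    print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False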

 

The number of Internet pages is extremely large; even the largest crawlers fall short of making a complete index. For this reason, search engines struggled to give relevant search results in the early years of the World Wide Web, before 2000. Today, relevant results are returned almost instantly.

 

The robot scans the visible text on the page, the content of various HTML tags, and the hyperlinks listed on the page. It then analyzes this information and processes it according to an algorithm devised by its owner. Depending on the search engine, the information is indexed and added to the search engine's database.
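To make that scanning step concrete, here is a small, simplified sketch in Python. The URL is a placeholder, and a real crawler would also filter out script and style content, queue the collected links for later visits, and apply the robots.txt checks described above.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class PageScanner(HTMLParser):
        """Collects the hyperlinks and visible-text fragments of one page."""

        def __init__(self):
            super().__init__()
            self.links = []  # href values from <a> tags: pages to visit next
            self.text = []   # text fragments that could be indexed

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

        def handle_data(self, data):
            if data.strip():
                self.text.append(data.strip())

    page = urlopen("https://example.com/").read().decode("utf-8", errors="replace")
    scanner = PageScanner()
    scanner.feed(page)
    print(scanner.links)           # hyperlinks found on the page
    print(" ".join(scanner.text))  # text that would go to the indexer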

 

Different search engines use different robots as their web crawlers. For example, Yahoo uses Slurp as its web-indexing robot, while Google uses Googlebot, and so on.
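These robots identify themselves through the User-Agent header sent with every request, which is also the name that robots.txt rules can target. A minimal sketch, with a made-up bot name and a placeholder URL:

    from urllib.request import Request, urlopen

    # A crawler typically announces its name and a page describing it,
    # so site owners can write robots.txt rules for that bot by name.
    request = Request(
        "https://example.com/",
        headers={"User-Agent": "ExampleBot/1.0 (+https://example.com/bot-info)"},
    )
    with urlopen(request) as response:
        page = response.read()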