
Web crawler

A web crawler (also known as a web spider or web robot) is a program or automated script which browses the World Wide Web in a methodical, automated manner.

This process is called Web crawling or spidering.
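In practice, crawling amounts to a breadth-first traversal of the Web's link graph: the crawler keeps a frontier of URLs still to visit, downloads each page, extracts its links, and adds unseen ones back to the frontier. The sketch below is a minimal, standard-library-only illustration of that loop; the function and parameter names (crawl, seed_url, max_pages, delay) and the same-host restriction are illustrative assumptions rather than anything prescribed here, and a production crawler would also honor robots.txt and handle non-HTML content.

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen
import time


class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=50, delay=1.0):
    """Breadth-first crawl starting from seed_url, staying on one host."""
    host = urlparse(seed_url).netloc
    frontier = deque([seed_url])   # URLs waiting to be visited
    visited = set()                # URLs already fetched
    pages = {}                     # url -> raw HTML, kept for later processing

    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except Exception:
            continue               # skip pages that cannot be downloaded
        pages[url] = html

        extractor = LinkExtractor()
        extractor.feed(html)
        for link in extractor.links:
            absolute = urljoin(url, link)          # resolve relative links
            if urlparse(absolute).netloc == host and absolute not in visited:
                frontier.append(absolute)
        time.sleep(delay)          # simple politeness pause between requests

    return pages

The returned mapping of URL to raw HTML is the "copy of the visited pages" that the next paragraph refers to.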

Many legitimate sites, in particular search engines, use spidering as a means of providing up-to-date data.

Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which will index the downloaded pages to provide fast searches.
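The fast searches come from an inverted index built over the downloaded copies: a mapping from each term to the set of pages that contain it, so a query only touches the relevant entries instead of rescanning every page. A minimal sketch follows, assuming a pages dictionary of URL to raw HTML such as the one returned by the crawl sketch above; the tag stripping and tokenization are deliberately naive.

import re
from collections import defaultdict


def build_index(pages):
    """Map each lowercase word to the set of URLs whose page text contains it."""
    index = defaultdict(set)
    for url, html in pages.items():
        text = re.sub(r"<[^>]+>", " ", html)            # crude removal of HTML tags
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
    return index


def search(index, query):
    """Return the URLs that contain every word of the query (simple AND search)."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

Given pages from the crawl above, build_index(pages) is computed once, and a call such as search(index, "web crawler") then answers in time proportional to the matching entries rather than to the size of the whole collection.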

Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code.
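A link checker reuses the same fetch-and-parse machinery on a single page: it extracts every link, requests each one, and reports those that return an error. The sketch below is again a standard-library-only illustration; the regex-based href extraction and the check_links name are simplifying assumptions rather than how any particular tool works.

import re
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


def check_links(page_url):
    """Report links on page_url that return an HTTP error or cannot be reached."""
    html = urlopen(page_url, timeout=10).read().decode("utf-8", errors="replace")
    hrefs = re.findall(r'href="([^"]+)"', html)   # crude extraction of double-quoted hrefs

    broken = []
    for link in hrefs:
        absolute = urljoin(page_url, link)
        if urlparse(absolute).scheme not in ("http", "https"):
            continue                              # skip mailto:, javascript:, etc.
        try:
            urlopen(absolute, timeout=10)
        except (HTTPError, URLError) as error:
            broken.append((absolute, str(error)))
    return broken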

Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for spam).

Note:   The above text is excerpted from the Wikipedia article "Web crawler", which has been released under the GNU Free Documentation License.