
Googlebot

Contributor(s): Matthew Haughn

Googlebot is the web crawling software (also known as a spider or web crawler) that gathers the web page information used to supply Google search engine results pages (SERPs).

Googlebot collects documents from the web to build Google’s search index. Through constantly gathering documents, the software discovers new pages and updates to existing pages. Googlebot uses a distributed design spanning many computers so it can grow as the web does.

The web crawler uses algorithms to determine which sites to browse, what rate to crawl them at and how many pages to fetch from each. Googlebot begins with a list of URLs generated from previous crawl sessions, which is then augmented by the sitemaps provided by webmasters. The software follows all linked elements in the web pages it browses, noting new sites, updates to existing sites and dead links. The information gathered is used to update Google's index of the web.
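The crawl loop described above — start from a seed list, fetch each page, follow its links and note dead ones — can be sketched as a simple breadth-first traversal. This is a minimal illustration, not Google's actual implementation; the link graph below is a hypothetical stand-in for real HTTP fetches.

```python
from collections import deque

# Hypothetical in-memory link graph standing in for real web fetches.
# A URL absent from the graph simulates a fetch failure (dead link).
link_graph = {
    "https://example.com/":  ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/b", "https://example.com/dead"],
    "https://example.com/b": [],
}

def crawl(seed_urls):
    """Breadth-first crawl starting from a seed list of URLs."""
    seen = set(seed_urls)
    queue = deque(seed_urls)
    index, dead_links = [], []
    while queue:
        url = queue.popleft()
        if url not in link_graph:      # fetch failed: record the dead link
            dead_links.append(url)
            continue
        index.append(url)              # add the fetched page to the index
        for link in link_graph[url]:   # follow every linked element
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index, dead_links

index, dead_links = crawl(["https://example.com/"])
```

Each new crawl could then begin from `index`, mirroring how Googlebot seeds a session from the results of previous ones.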

Googlebot creates an index within the limitations set forth by webmasters in their robots.txt files. Should a webmaster wish to keep pages hidden from Google search, for example, they can block Googlebot in a robots.txt file placed in the top-level directory of the site. To prevent Googlebot from following any links on a given page, the webmaster can include the nofollow robots meta tag; to prevent the bot from following individual links, the webmaster can add rel="nofollow" to the links themselves.
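The effect of such a robots.txt rule can be checked with Python's standard `urllib.robotparser` module, which implements the same exclusion protocol that crawlers like Googlebot honor. The rule set below is a hypothetical example blocking Googlebot from a `/private/` folder:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt blocking Googlebot from the /private/ folder.
robots_txt = """\
User-agent: Googlebot
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The home page is crawlable; anything under /private/ is not.
allowed_home = parser.can_fetch("Googlebot", "https://example.com/")
allowed_private = parser.can_fetch("Googlebot", "https://example.com/private/page.html")
```

In production, `RobotFileParser.set_url()` and `read()` fetch the live robots.txt from a site's root instead of parsing a string.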

A site’s webmaster might detect visits every few seconds from computers at google.com showing the user agent Googlebot. Generally, Google tries to index as much of a site as it can without overwhelming the site’s bandwidth. If a webmaster finds that Googlebot is using too much bandwidth, they can set a crawl rate limit in Google Search Console that will remain in effect for 90 days.
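One simple way a webmaster can spot those visits is to filter the server access log for the Googlebot user agent token. The log lines below are fabricated samples in a combined-log style; only the substring match on the user agent is the point of the sketch:

```python
# Hypothetical access-log lines; the first carries Googlebot's user agent.
log_lines = [
    '66.249.66.1 - - [10/Jun/2017:06:25:01] "GET / HTTP/1.1" 200 '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.9 - - [10/Jun/2017:06:25:03] "GET /about HTTP/1.1" 200 '
    '"Mozilla/5.0 (Windows NT 10.0)"',
]

# Keep only the requests identifying themselves as Googlebot.
googlebot_hits = [line for line in log_lines if "Googlebot" in line]
```

Because the user agent string can be spoofed, a reverse-DNS lookup of the requesting IP back to google.com is the usual way to confirm a visit is genuine.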

Presenting at the 2011 SearchLove conference, Josh Giardino claimed that Googlebot is actually the Chrome browser. That would mean that Googlebot has not only the ability to browse pages in text, as crawlers do, but can also run scripts and media as web browsers do. That capacity could allow Googlebot to find hidden information and perform other tasks that are not acknowledged by Google. Giardino went so far as to say that Googlebot may be the original reason that the company created Chrome.

This was last updated in June 2017
