Component of a search engine that gathers listings by automatically "crawling" the Web. A search engine's crawler (also known as a spider or robot) follows links from page to page, makes copies of the pages it finds, and stores those copies in the search engine's index. Understanding how the crawler perceives a page, which differs from how a human visitor sees it, is crucial to SEO strategy.
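The follow-links, copy, and index cycle described above can be sketched as a simple breadth-first crawl. This is an illustrative toy, not any real engine's implementation; the `fetch` callable is a hypothetical placeholder for whatever actually downloads a page (e.g. an HTTP client).

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, resolved against the page's URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def crawl(seed_url, fetch, max_pages=100):
    """Breadth-first crawl: fetch each page, store a copy, follow its links.

    `fetch(url)` is assumed to return the page's HTML as a string.
    """
    index = {}                       # url -> stored copy (the "index")
    queue = deque([seed_url])
    seen = {seed_url}
    while queue and len(index) < max_pages:
        url = queue.popleft()
        html = fetch(url)
        index[url] = html            # the copy the search engine stores
        extractor = LinkExtractor(url)
        extractor.feed(html)
        for link in extractor.links:  # follow links to discover new pages
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index
```

A real crawler adds many layers this sketch omits, such as honoring `robots.txt`, rate-limiting requests per host, and scheduling recrawls, but the core loop of dequeue, fetch, store, and enqueue new links is the same.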