Search engines constantly scan the Internet to index web pages. Their web crawlers, also known as spiders or bots, analyze websites and their individual pages to determine how each page should be indexed and stored for later retrieval.
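
As a rough illustration of the crawling step, the sketch below fetches pages breadth-first and queues the links it finds for later visits. It is a minimal sketch using only Python's standard library; the seed URL, the page limit and the regex-based link extraction are illustrative assumptions, not how a production crawler works.

```python
# Minimal crawl-loop sketch (standard library only). The seed URL and
# page limit are illustrative assumptions.
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen
import re

def crawl(seed_url, max_pages=10):
    """Fetch pages breadth-first, following links found in each page."""
    frontier = deque([seed_url])   # URLs waiting to be visited
    seen = {seed_url}
    pages = {}                     # url -> raw HTML, handed to the indexer later

    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue               # skip pages that fail to load
        pages[url] = html
        # Naive link extraction; a real crawler would use a proper HTML parser.
        for href in re.findall(r'href="([^"#]+)"', html):
            link = urljoin(url, href)
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return pages
```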

The web crawlers follow the links on a site and extract words from each page's source code, focusing primarily on titles, headings, body copy and special fields called meta tags. The search engines then use the information the crawlers gather to index and store the website's content for subsequent searches.
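
To make the extraction and indexing step concrete, the following sketch pulls the title, headings, meta description and body text out of each fetched page and records every word in a simple inverted index (word to set of URLs). The `PageExtractor` class, the `build_index` helper and the particular tags collected are assumptions made for illustration; real search engines index many more signals.

```python
# Extraction + indexing sketch, assuming HTML like the crawler above returns.
from collections import defaultdict
from html.parser import HTMLParser
import re

class PageExtractor(HTMLParser):
    """Collects the title, headings, meta description and body text of a page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.headings = []
        self.meta_description = ""
        self.body_text = []
        self._current = None        # tag whose text we are currently collecting

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta_description = attrs.get("content", "")
        elif tag in ("title", "h1", "h2", "h3", "p"):
            self._current = tag

    def handle_endtag(self, tag):
        self._current = None

    def handle_data(self, data):
        text = data.strip()
        if not text or self._current is None:
            return
        if self._current == "title":
            self.title += text
        elif self._current in ("h1", "h2", "h3"):
            self.headings.append(text)
        else:
            self.body_text.append(text)

def build_index(pages):
    """pages: url -> raw HTML. Returns (inverted index, per-page summaries)."""
    index = defaultdict(set)        # word -> set of URLs containing it
    summaries = {}
    for url, html in pages.items():
        parser = PageExtractor()
        parser.feed(html)
        text = " ".join([parser.title, *parser.headings,
                         parser.meta_description, *parser.body_text])
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
        summaries[url] = {"title": parser.title,
                          "description": parser.meta_description,
                          "text": " ".join(parser.body_text)}
    return index, summaries
```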

When a user enters a keyword or phrase into a search engine, the engine examines its indexed content and returns a list of web pages that best match the search terms. Each result typically includes a short summary containing the page's title, its description from the meta tags and often a snippet of text.
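
A minimal sketch of the lookup step, continuing the toy index above: pages containing every query word are returned with a title, description and short snippet, roughly mirroring a result listing. The `search` function, the AND-style matching and the 160-character snippet length are illustrative assumptions.

```python
def search(query, index, summaries):
    """Return summaries of pages that contain every word in the query."""
    words = query.lower().split()
    if not words:
        return []
    # Keep only pages that contain all query words (a simple AND search).
    matches = set.intersection(*(index.get(w, set()) for w in words))
    results = []
    for url in matches:
        page = summaries[url]
        results.append({
            "url": url,
            "title": page["title"],
            "description": page["description"],
            "snippet": page["text"][:160],   # short excerpt, as in a result listing
        })
    return results
```

Chaining the three sketches (crawl, build_index, search) reproduces the crawl, index and query pipeline described in this section, albeit without the ranking, scale and freshness handling of a real search engine.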