Links can have many attributes applied to them, but the engines ignore nearly all of them, with the important exception of the rel="nofollow" attribute. In the example below, by adding rel="nofollow" to the link tag, we've told the search engines that we, the site owners, do not want this link to be interpreted as the normal "editorial vote."
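Example (the URL and anchor text here are illustrative):
<a href="http://www.example.com" rel="nofollow">Example Link</a>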
Nofollow, taken literally, instructs search engines not to follow a link (although some do). Nofollow came about as a method to help stop automated blog comment, guest book, and link injection spam (read more about the launch here), but has morphed over time into a way of telling the engines to discount any link value that would ordinarily be passed. Links tagged with nofollow are interpreted slightly differently by each of the engines, but it is clear they do not pass as much weight as normal "followed" links.
Although they don't pass as much value as their followed cousins, nofollowed links are a natural part of a diverse link profile. A website with lots of inbound links will accumulate many nofollowed links, and this isn't a bad thing. In fact, Moz's Ranking Factors showed that high ranking sites tended to have a higher percentage of inbound nofollowed links than lower ranking sites.
Google
Google states that in most cases, they don't follow nofollowed links, nor do these links transfer PageRank or anchor text values. Essentially, using nofollow causes us to drop the target links from our overall graph of the web. Nofollowed links carry no weight and are interpreted as HTML text (as though the link did not exist). That said, many webmasters believe that even a nofollow link from a high-authority site, such as Wikipedia, could be interpreted as a sign of trust.
Bing & Yahoo!
Bing, which powers Yahoo search results, has also stated that they do not include nofollowed links in the link graph. In the past, they have also stated nofollowed links may still be used by their crawlers as a way to discover new pages. So while they "may" follow the links, they will not count them as a method for positively impacting rankings.
Keywords are fundamental to the search process - they are the building blocks of language and of search. In fact, the entire science of information retrieval (including web-based search engines like Google) is based on keywords. As the engines crawl and index the contents of pages around the web, they keep track of those pages in keyword-based indices. Thus, rather than storing 25 billion web pages all in one database, the engines have millions and millions of smaller databases, each centered on a particular keyword term or phrase. This makes it much faster for the engines to retrieve the data they need in a mere fraction of a second.
Obviously, if you want your page to have a chance of ranking in the search results for "dog," it's wise to make sure the word "dog" is part of the indexable content of your document.
Keywords dominate our search intent and our interaction with the engines.
When a search is performed, the engine matches pages to retrieve based on the words entered into the search box. Other data, such as the order of the words ("tanks shooting" vs. "shooting tanks"), spelling, punctuation, and capitalization of those keywords provide additional information that the engines use to help retrieve the right pages and rank them.
To help accomplish this, search engines measure the ways keywords are used on pages to help determine the "relevance" of a particular document to a query. One of the best ways to "optimize" a page's rankings is to ensure that keywords are prominently used in titles, text, and meta data.
Generally, the more specific your keywords, the better your chances of ranking, thanks to less competition. Compare the broad term "books" to the specific title Tale of Two Cities: while the broad term returns an enormous number of results, there are far fewer results, and thus less competition, for the specific phrase.
Since the dawn of online search, folks have abused keywords in a misguided effort to manipulate the engines. This involves "stuffing" keywords into text, URLs, meta tags, and links. Unfortunately, this tactic almost always does more harm than good to your site.
In the early days, search engines relied on keyword usage as a prime relevancy signal, regardless of how the keywords were actually used. Today, although search engines still can't read and comprehend text as well as a human, the use of machine learning has allowed them to get closer to this ideal.
The best practice is to use your keywords naturally and strategically (more on this below). If your page targets the keyword phrase "Eiffel Tower" then you might naturally include content about the Eiffel Tower itself, the history of the tower, or even recommended Paris hotels. On the other hand, if you simply sprinkle the words "Eiffel Tower" onto a page with irrelevant content, such as a page about dog breeding, then your efforts to rank for "Eiffel Tower" will be a long, uphill battle.
That said, keyword usage and targeting are still a part of the search engines' ranking algorithms, and we can leverage some effective "best practices" for keyword usage to help create pages that are close to "optimized." Here at Moz, we engage in a lot of testing and get to see a huge number of search results and shifts based on keyword usage tactics. When working with one of your own sites, this is the process we recommend (a markup sketch putting it all together follows the list):
- Use the keyword in the title tag at least once. Try to keep the keyword as close to the beginning of the title tag as possible. More detail on title tags follows later in this section.
- Once prominently near the top of the page.
- At least 2-3 times, including variations, in the body copy on the page - sometimes a few more if there's a lot of text content. You may find additional value in using the keyword or variations more than this, but in our experience, adding more instances of a term or phrase tends to have little to no impact on rankings.
- At least once in the alt attribute of an image on the page. This not only helps with web search, but also image search, which can occasionally bring valuable traffic.
- Once in the URL. Additional rules for URLs and keywords are discussed later on in this section.
- At least once in the meta description tag. Note that the meta description tag does NOT get used by the engines for rankings, but rather helps to attract clicks by searchers from the results page, as it is the "snippet" of text used by the search engines.
- Generally not in the anchor text of links on the page that point to other pages on your site or to different domains (this is a bit complex - see this blog post for details).
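Putting these recommendations together, a page targeting the phrase "running shoes" might be marked up roughly like this. This is only a sketch: the URL (say, http://example.com/running-shoes), filenames, and copy are all illustrative, not a definitive template.
<head>
  <title>Running Shoes: How to Choose the Right Pair</title>
  <meta name="description" content="Learn how to choose running shoes, with advice on fit, cushioning, and care.">
</head>
<body>
  <h1>Running Shoes</h1>
  <p>Choosing running shoes starts with fit. A good pair of running shoes supports your natural stride...</p>
  <img src="running-shoes.jpg" alt="Running shoes on a trail" />
</body>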
Keyword Density Myth
Keyword density is not a part of modern ranking algorithms, as demonstrated in Dr. Edel Garcia's The Keyword Density of Non-Sense.
If two documents, D1 and D2, consist of 1000 terms (l = 1000) and repeat a term 20 times (tf = 20), then a keyword density analyzer will tell you that for both documents the keyword density is KD = 20/1000 = 0.020 (or 2%) for that term. Identical values are obtained when tf = 10 and l = 500. Evidently, a keyword density analyzer does not establish which document is more relevant. A density analysis or keyword density ratio tells us nothing about:
- The relative distance between keywords in documents (proximity)
- Where in a document the terms occur (distribution)
- The co-citation frequency between terms (co-occurrence)
- The main theme, topic, and sub-topics (on-topic issues) of the documents
Keyword density is divorced from content, quality, semantics, and relevancy.
What does an optimal page look like, then? An optimal page for the phrase "running shoes" would simply use the term naturally and prominently - in the title tag, headline, body copy, image alt text, and URL - as in the markup sketch earlier in this section, rather than at any particular density.
You can read more about on-page optimization in this post.
The title tag of any page appears at the top of the browser window (and in the browser tab), and is often used as the title when your content is shared through social media or republished.
Using keywords in the title tag means that search engines will "bold" those terms in the search results when a user has performed a query with those terms. This helps garner greater visibility and a higher click-through rate.
The final important reason to create descriptive, keyword-laden title tags is ranking at the search engines. In Moz's biannual survey of SEO industry leaders, 94% of participants said that the title tag was the most important place to use keywords to achieve high rankings.
The title element of a page is meant to be an accurate, concise description of a page's content. It is critical to both user experience and search engine optimization.
As title tags are such an important part of search engine optimization, the following best practices for title tag creation make for terrific low-hanging SEO fruit. The recommendations below cover the critical parts of optimizing title tags for search engine and usability goals.
Be mindful of length
Search engines display only the first 65-75 characters of a title tag in the search results (after this limit, the engines show an ellipsis - "..." - to indicate that the title tag has been cut off). This is also the general limit allowed by most social media sites, so sticking to this limit is generally wise. However, if you're targeting multiple keywords (or an especially long keyword phrase) and having them in the title tag is essential to ranking, it may be advisable to go longer.
Place important keywords close to the front
The closer to the start of the title tag your keywords are, the more helpful they'll be for ranking and the more likely a user will be to click them in the search results.
At Moz, we love to end every title tag with a brand name mention, as this helps to increase brand awareness and creates a higher click-through rate among people who like and are familiar with the brand. Sometimes it makes sense to place your brand at the beginning of the title tag, such as on your homepage. Since words at the beginning of the title tag carry more weight, be mindful of what you are trying to rank for.
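For example, a title tag following this pattern - keyword phrase up front, brand at the end - might read like this (the post title is hypothetical):
<title>Title Tag Best Practices for SEO | Moz</title>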
Consider readability and emotional impact
Title tags should be descriptive and readable. Creating a compelling title tag will pull in more visits from the search results and can help to invest visitors in your site. Thus, it's important to not only think about optimization and keyword usage, but the entire user experience. The title tag is a new visitor's first interaction with your brand and should convey the most positive impression possible.
Best Practices for Title Tags
Meta tags were originally intended to provide a proxy for information about a website's content. Several of the basic meta tags are listed below, along with a description of their use.
The Meta Robots tag can be used to control search engine spider activity (for all of the major engines) on a page level. There are several ways to use meta robots to control how search engines treat a page:
index/noindex tells the engines whether the page should be crawled and kept in the engines' index for retrieval. If you opt to use "noindex", the page will be excluded from the engines. By default, search engines assume they can index all pages, so using the "index" value is generally unnecessary.
follow/nofollow tells the engines whether links on the page should be crawled. If you elect to employ "nofollow," the engines will disregard the links on the page both for discovery and ranking purposes. By default, all pages are assumed to have the "follow" attribute.
Example: <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
noarchive is used to restrict search engines from saving a cached copy of the page. By default, the engines will maintain visible copies of all pages they have indexed, accessible to searchers through the "cached" link in the search results.
nosnippet informs the engines that they should refrain from displaying a descriptive block of text next to the page's title and URL in the search results.
noodp/noydir are specialized tags telling the engines not to grab a descriptive snippet about a page from the Open Directory Project (DMOZ) or the Yahoo! Directory for display in the search results.
The X-Robots-Tag HTTP header directive also accomplishes these same objectives. This technique works especially well for content within non-HTML files, like images.
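For example, to keep PDF files out of the index, a server running Apache could send the header via mod_headers - a minimal sketch, with an illustrative file pattern:
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>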
The meta description tag exists as a short description of a page's content. Search engines do not use the keywords or phrases in this tag for rankings, but meta descriptions are the primary source for the snippet of text displayed beneath a listing in the results.
The meta description tag serves the function of advertising copy, drawing readers to your site from the results and thus, is an extremely important part of search marketing. Crafting a readable, compelling description using important keywords (notice how Google "bolds" the searched keywords in the description) can draw a much higher click-through rate of searchers to your page.
Meta descriptions can be any length, but search engines generally truncate snippets longer than 160 characters, so it's wise to stay within this limit.
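A meta description tag takes this form (the copy here is illustrative):
<meta name="description" content="Learn how to write compelling meta descriptions that earn clicks from the search results.">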
In the absence of meta descriptions, search engines will create the search snippet from other elements of the page. For pages that target multiple keywords and topics, this is a perfectly valid tactic.
Not as Important Meta Tags
Meta keywords
The meta keywords tag had value at one time, but is no longer valuable or important to search engine optimization. For more on the history and a full account of why the meta keywords tag has fallen into disuse, read Meta Keywords Tag 101 from SearchEngineLand.
Meta refresh, meta revisit-after, meta content type, etc.
Although these tags can have uses for search engine optimization, they are less critical to the process, and so we'll leave it to Google's Webmaster Tools Help to answer in greater detail: Meta Tags.
URLs, the web addresses of particular documents, are of great value from a search perspective. They appear in multiple important locations.
Since search engines display URLs in the results, they can impact click-through and visibility. URLs are also used in ranking documents, and those pages whose names include the queried search terms receive some benefit from proper, descriptive use of keywords.
URLs make an appearance in the web browser's address bar, and while this generally has little impact on search engines, poor URL structure and design can result in negative user experiences.
URLs are also frequently used as link anchor text: when a page's address is pasted into a blog post, the URL itself serves as the anchor text pointing to the referenced page.
Place yourself in the mind of a user and look at your URL. If you can easily and accurately predict the content you'd expect to find on the page, your URLs are appropriately descriptive. You don't need to spell out every last detail in the URL, but a rough idea is a good starting point.
Shorter is better
While a descriptive URL is important, minimizing length and trailing slashes will make your URLs easier to copy and paste (into emails, blog posts, text messages, etc.) and ensure they are fully visible in the search results.
Keyword use is important (but overuse is dangerous)
If your page is targeting a specific term or phrase, make sure to include it in the URL. However, don't go overboard by trying to stuff in multiple keywords for SEO purposes - overuse will result in less usable URLs and can trip spam filters.
The best URLs are human-readable, without lots of parameters, numbers, and symbols. Using technologies like mod_rewrite for Apache and ISAPI_rewrite for Microsoft, you can easily transform dynamic URLs like http://moz.com/blog?id=123 into a more readable, static-looking version like http://moz.com/blog/google-fresh-factor. Even single dynamic parameters in a URL can result in lower overall ranking and indexing.
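As a minimal sketch, an Apache mod_rewrite rule mapping the readable URL onto the dynamic one could look like this (the pattern and id parameter are illustrative; a real site would typically look the slug up in its application code):
RewriteEngine On
RewriteRule ^blog/google-fresh-factor$ /blog?id=123 [L]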
Use hyphens to separate words
Not all web applications accurately interpret separators like the underscore ("_"), plus sign ("+"), or space ("%20"), so use the hyphen ("-") character to separate words in a URL, as in the google-fresh-factor example above.
Duplicate content is one of the most vexing and troublesome problems any website can face. Over the past few years, search engines have cracked down on "thin" and duplicate content through penalties and lower rankings.
Canonicalization happens when two or more duplicate versions of a webpage appear on different URLs. This is very common with modern Content Management Systems. For example, you offer a regular version of a page and a "print optimized" version of the same content. Duplicate content can even appear on multiple websites. For search engines, this presents a big problem - which version of this content should they show to searchers? In SEO circles, this issue is often referred to as duplicate content - described in greater detail here.
The engines are picky about duplicate versions of a single piece of material. To provide the best searcher experience, they will rarely show multiple, duplicate pieces of content and thus are forced to choose which version is most likely to be the original. The end result is that ALL of your duplicate content could rank lower than it should.
Canonicalization is the practice of organizing your content in such a way that every unique piece has one and only one URL. If you leave multiple versions of content on a website (or websites), the engines are left to guess: which diamond among the copies is the real one?
Instead, if the site owner took those three pages and 301-redirected them, the search engines would have only one, stronger page to show in the listings from that site.
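In Apache, for instance, each duplicate URL can be 301-redirected to the canonical version with a single mod_alias directive (the paths here are illustrative):
Redirect 301 /print/page.html http://moz.com/page.html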
The Canonical Tag to the Rescue!
A different option from the search engines, called the "Canonical URL Tag," is another way to reduce instances of duplicate content on a single site and canonicalize to an individual URL. This can also be used across different websites, from one URL on one domain to a different URL on a different domain.
Use the canonical tag within the page that contains duplicate content. The "target" of the canonical tag points to the "master" URL that you want to rank for.
<link rel="canonical" href="http://moz.com/blog" />
This tells search engines that the page in question should be treated as though it were a copy of the URL http://moz.com/blog and that all of the link & content metrics the engines apply should flow back to that URL.
The Canonical URL tag is similar in many ways to a 301 redirect from an SEO perspective. In essence, you're telling the engines that multiple pages should be considered as one (which a 301 does), without actually redirecting visitors to the new URL - often saving your development staff considerable heartache.
For more about different types of duplicate content, this post by Dr. Pete deserves special mention.
Ever see a 5 star rating in a search result? Chances are, the search engine received that information from rich snippets embedded on the webpage. Rich snippets are a type of structured data that allow webmasters to mark up content in ways that provide information to the search engines.
While the use of rich snippets and structured data is not a required element of search engine friendly design, its growing adoption means that webmasters who take advantage of it may enjoy an edge in some circumstances.
Structured data means adding markup to your content so that search engines can easily identify what type of content it is. Schema.org provides several types of examples of data that can benefit from structured markup. These include people, products, reviews, businesses, recipes and events.
Often the search engines include structured data in search results, such as in the case of user reviews (stars) and author profiles (pictures). There are several good resources for learning more about rich snippets online, including information at Schema.org and Google's Rich Snippet Testing Tool.
Rich Snippets in the Wild
Let's say you announce an SEO Conference on your blog. In regular HTML, your code might look like this:
SEO Conference<br/>
Learn about SEO from experts in the field.<br/>
May 8, 7:30pm
Now, by structuring the data, we can tell the search engines more specific information about the type of data. The end result might look like this:
<div itemscope itemtype="http://schema.org/Event">
<div itemprop="name">SEO Conference</div>
<span itemprop="description">Learn about SEO from experts in the field.</span>
<time itemprop="startDate" datetime="2012-05-08T19:30">May 8, 7:30pm</time>
</div>
How scrapers steal your rankings
Unfortunately, the web is filled with hundreds of thousands (if not millions) of unscrupulous websites whose business and traffic models depend on plucking content off other sites and re-using it (sometimes in strangely modified ways) on their own domains. This practice of fetching your content and re-publishing it is called "scraping," and the scrapers make remarkably good earnings by outranking sites for their own content and displaying ads (ironically, often Google's own AdSense program).
When you publish content in any type of feed format - RSS/XML/etc - make sure to ping the major blogging/tracking services (like Google, Technorati, Yahoo!, etc.). You can find instructions for how to ping services like Google and Technorati directly from their sites, or use a service like Pingomatic to automate the process. If your publishing software is custom-built, it's typically wise for the developer(s) to include auto-pinging upon publishing.
Next, you can use the scrapers' laziness against them. Most of the scrapers on the web will re-publish content without editing, and thus, by including links back to your site, and to the specific post you've authored, you can ensure that the search engines see most of the copies linking back to you (indicating that your source is probably the originator). To do this, you'll need to use absolute, rather than relative, links in your internal linking structure. Thus, rather than linking to your home page using:
<a href="../>Home</a>You would instead use:
This way, when a scraper picks up and copies the content, the link remains pointing to your site.
There are more advanced ways to protect against scraping, but none of them are entirely foolproof. You should expect that the more popular and visible your site gets, the more often you'll find your content scraped and re-published. Many times, you can ignore this problem, but if it gets very severe, and you find the scrapers taking away your rankings and traffic, you may consider using a legal process called a DMCA takedown. Luckily, Moz's own in-house counsel, Sarah Bird, has authored a brilliant piece to help solve just this problem - Four Ways to Enforce Your Copyright: What to Do When Your Online Content is Being Stolen.