Robot Directives

Most search engine robots support on-page directives. When these are implemented correctly, search engines will almost always respect them.

Directive – Explanation

noindex – Crawl this document, but do not include it in the search engine’s index (or remove it if it is already indexed).
nofollow – Do not follow a specific link, or all links within a document (depending on implementation); this asks the search engine not to crawl the URLs linked from this document.
noarchive – Do not add this document to any full-text cache.
noodp – Do not use the description from the Open Directory Project (dmoz.org) when displaying this page on search result pages.
noydir – Do not use the description from the Yahoo Directory when displaying this page on search result pages.
nosnippet – Do not show a description snippet when displaying this page on search result pages.
notranslate – Do not offer to translate this document on search result pages.
noimageindex – Do not index the images discovered within this document.
unavailable_after – Do not return this document on search result pages after a specified date/time.
none – Equivalent to noindex,nofollow.
all – No restrictions; this is the default behaviour.
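
Of these, unavailable_after is the only directive in the table that takes a value. As a hedged sketch, using the robots meta tag syntax covered below and the RFC 850 date format Google documented when it introduced the directive (the date itself is illustrative):

<meta name="robots" content="unavailable_after: 25-Aug-2025 15:00:00 GMT">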

Robots directives can be set either in the HTTP response headers or in the HTML <head>.

HTTP Headers

Robots directives can be sent in the HTTP response using the X-Robots-Tag header. The directives can be given either on a single comma-separated line or across several header lines:

HTTP/1.1 200 OK
...
X-Robots-Tag: noindex
X-Robots-Tag: nofollow,noodp
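
The header can also be scoped to a single crawler by prefixing the directives with a user-agent token. This form appears in Google’s documentation; support among other engines varies, so treat the sketch below as Googlebot-specific:

HTTP/1.1 200 OK
...
X-Robots-Tag: googlebot: noindex, nofollow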

HTML Head

<meta name="robots" content="noindex,nofollow,noodp">

The robots meta tag applies its directives to all robots. You can also set directives specifically for Googlebot using the following tag:

<meta name="googlebot" content="directives,separated,by,commas">
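
For example, the following tag (the directive combination is illustrative) asks Googlebot alone not to index the page or cache it, leaving other robots unrestricted:

<meta name="googlebot" content="noindex,noarchive">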