Methods to Keep Google from Indexing a Page
Have you ever wanted to stop Google from indexing a particular URL on your website and displaying it in their search engine results pages (SERPs)? If you manage websites long enough, a day will probably come when you need to know how to do this.
The three methods most commonly used to prevent the indexing of a URL by Google are as follows:
Using the rel="nofollow" attribute on all anchor elements used to link to the page, to prevent the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
While all three methods appear similar at first glance, the results can vary greatly depending on which one you choose.
Using rel="nofollow" to prevent Google indexing
Many inexperienced webmasters try to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site used to link to that URL.
Including a rel="nofollow" attribute on a link prevents Google's crawler from following the link, which in turn prevents it from discovering, crawling, and indexing the target page. While this method might work as a short-term fix, it is not a viable long-term solution.
The flaw with this approach is that it assumes all inbound links to the URL will include a rel="nofollow" attribute. The webmaster, however, has no way to prevent other websites from linking to the URL with a followed link. So the likelihood that the URL will eventually get crawled and indexed using this method alone is quite high.
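For illustration, a nofollow link looks like this (the URL is a placeholder):

```html
<!-- The nofollow hint asks crawlers not to follow this one link.
     It has no effect on followed links from other sites. -->
<a href="https://example.com/private-page.html" rel="nofollow">Private page</a>
```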
Using robots.txt to prevent Google indexing
Another common method used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
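As a sketch, a disallow directive for a hypothetical /private-page.html would look like this in robots.txt:

```text
User-agent: *
Disallow: /private-page.html
```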
Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough websites link to the URL, Google can often infer the topic of the page from the anchor text of those inbound links, and as a result they will show the URL in the SERPs for related searches. So while a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.
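You can sanity-check a disallow rule with Python's standard urllib.robotparser module; the rules and URLs below are placeholders for illustration:

```python
from urllib import robotparser

# Hypothetical robots.txt content; for a live site you would instead
# call rp.set_url("https://example.com/robots.txt") and rp.read().
rules = """
User-agent: *
Disallow: /private-page.html
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Googlebot matches the wildcard group, so the disallowed page is not fetchable.
print(rp.can_fetch("Googlebot", "https://example.com/private-page.html"))
# Other paths remain fetchable.
print(rp.can_fetch("Googlebot", "https://example.com/public-page.html"))
```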
Using the meta robots tag to prevent Google indexing
If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, then the most effective approach is to use a meta robots tag with a content="noindex" attribute within the head element of the web page. Of course, for Google to actually see this meta robots tag, they must first be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it is never shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
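A minimal sketch of the tag placement, assuming a generic HTML page:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Tells crawlers that honor the directive not to index this page -->
    <meta name="robots" content="noindex">
    <title>Private page</title>
  </head>
  <body>...</body>
</html>
```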