
With the Internet growing at an astounding rate, it is getting harder and harder to earn a high page ranking. Search engines weigh many signals when deciding how to rank a page, and knowing how some of them work will help you increase your site's exposure on the Internet. One signal you need to take into consideration is the “meta robots” tag. Believe it or not, you can control the robots that crawl your site in a number of ways, and doing so can help your ranking.
Meta robots tags are set on each individual page of a website and tell search engine crawlers how to treat that single page. For the most part you want as many crawlers as possible to access your site; however, there are certain instances where you may not want particular pages or areas of your site crawled. Robots tags let you control what search engines index, keep certain pages out of search results, and still pass their link value on to other pages.
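As a rough sketch, here is where such a tag sits in a page's HTML (the title and body are placeholders; this particular combination keeps the page out of search results while still letting its links pass value):

    <html>
      <head>
        <title>Example Page</title>
        <!-- One meta robots tag per page, placed in the head -->
        <meta name="robots" content="noindex, follow">
      </head>
      <body>
        ...
      </body>
    </html>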
Why would you want to block a page from being indexed? Here are a few reasons. For security, you may want to protect private information such as contact details and addresses, and keep spam bots from harvesting that information. You can also block duplicate content, which is a serious SEO problem, as well as user-generated content that cannot be vouched for, such as comments and reviews posted on your site or blog by regular users.
For the most part, you will use the default values, meta name="robots" content="index, follow", which is the same as having no meta robots tag at all. For duplicate pages you will want meta name="robots" content="noindex, follow": you do not want the duplicate page indexed, but you do want the links on it to keep their value. For user-generated content, meta name="robots" content="index, nofollow" is appropriate; the page will still be indexed, but its links will pass no value. When protecting private information, use meta name="robots" content="noindex, nofollow" to block both indexing and link following.
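Written out as they would appear in each page's head, the four variations look like this:

    <!-- Default: index the page and follow its links (same as no tag at all) -->
    <meta name="robots" content="index, follow">

    <!-- Duplicate pages: keep out of the index, but still pass link value -->
    <meta name="robots" content="noindex, follow">

    <!-- User-generated content: index the page, but give its links no value -->
    <meta name="robots" content="index, nofollow">

    <!-- Private information: block both indexing and link following -->
    <meta name="robots" content="noindex, nofollow">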
To get started, place the appropriate meta robots tag from above in the head section of each page you want to control. You can also create a robots.txt file at the root of your site that lists the directories and pages you do not want crawled at all. Every well-behaved spider or bot that visits your site reads robots.txt first, which tells it where not to go on your website; the meta robots tags then tell it how to handle the individual pages it does crawl.
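As a minimal sketch, assuming a hypothetical /private/ directory and a duplicate printable page you want crawlers to skip entirely, a robots.txt might look like this:

    # Applies to all crawlers
    User-agent: *
    # Keep bots out of the private area
    Disallow: /private/
    # Skip a specific duplicate page
    Disallow: /products/print-version.html

Keep in mind that robots.txt only controls where crawlers go; the meta robots tags on the pages themselves are what control indexing and link following.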