
Better Robots.txt Rules for WordPress

Cleaning up my files during the recent redesign, I realized that several years had somehow passed since the last time I even looked at the site’s robots.txt file. I guess that’s a good thing, but with all of the changes to site structure and content, it was time again for a delightful romp through robots.txt.

This post summarizes my research and gives you a near-perfect robots file, so you can either copy/paste it completely “as-is”, or use it as a starting point for your own customization.

Robots.txt in 30 seconds

Primarily, robots directives disallow obedient spiders access to specified parts of your site. They can also explicitly “allow” access to specific files and directories. So basically they’re used to let Google, Bing et al know where they can go when visiting your site. You can also do nifty stuff like instruct specific user-agents and declare sitemaps. For just a simple text file, robots.txt wields considerable power. And we want to use whatever power we can get to our greatest advantage.
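
For example, here is a quick sketch that combines those ideas in one file; the bot name and directory are hypothetical and only illustrate the syntax:

User-agent: *
Disallow: /example-private/

User-agent: ExampleBot
Disallow: /

Sitemap: https://example.com/sitemap.xml

The first group applies to all compliant bots, the second group applies only to the made-up ExampleBot (which is blocked entirely), and the Sitemap line declares where crawlers can find your sitemap.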

Better robots.txt for WordPress

Running WordPress, you want search engines to crawl and index your posts and pages, but not your core WP files and directories. You also want to make sure that feeds and trackbacks aren’t included in the search results. It’s also good practice to declare a sitemap. With that in mind, here are the new and improved robots.txt rules for WordPress:

User-agent: *
Disallow: /wp-admin/
Disallow: /trackback/
Disallow: /xmlrpc.php
Disallow: /feed/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://example.com/sitemap.xml

Only one small edit is required: change the Sitemap to match the location of your sitemap (or remove the line if no sitemap is available).

Important: As of version 5.5, WordPress automatically generates a sitemap for your site. For more information check out this in-depth tutorial on WP Sitemaps.
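
If you use the core sitemap, the Sitemap line in your robots.txt can simply point at its default location (swap in your own domain):

Sitemap: https://example.com/wp-sitemap.xml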

I use this exact code on nearly all of my major sites. It’s also fine to customize the rules, say if you need to exclude any custom directories and/or files, based on your actual site structure and SEO strategy.
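
For example, to keep a couple of extra locations out of the index, you might append rules like these to the block above; the paths here are made up purely for illustration:

Disallow: /example-downloads/
Disallow: /example-notes/private-draft.html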

Usage

To add the robots rules code to your WordPress-powered site, just copy/paste the code into a blank file named robots.txt. Then add the file to your web-accessible root directory, for example:

https://perishablepress.com/robots.txt

If you take a look at the contents of the robots.txt file for Perishable Press, you’ll notice an additional robots directive that forbids crawl access to the site’s blackhole for bad bots. Let’s have a look:

User-agent: *
Disallow: /wp-admin/
Disallow: /trackback/
Disallow: /xmlrpc.php
Disallow: /feed/
Disallow: /blackhole/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://perishablepress.com/wp-sitemap.xml

Spiders don’t need to be crawling around anything in /wp-admin/, so that’s disallowed. Likewise, trackbacks, xmlrpc, and feeds don’t need to be crawled, so we disallow those as well. Also, notice that we add an explicit Allow directive that allows access to the WordPress Ajax file, so crawlers and bots have access to any Ajax-generated content. Lastly, we make sure to declare the location of our sitemap, just to make it official.

Notes & Updates

Update! The following directives have been removed from the tried and true robots.txt rules in order to appease Google’s new requirement that googlebot always be allowed complete crawl access to any publicly available file.

Disallow: /wp-content/
Disallow: /wp-includes/

Because /wp-content/ and /wp-includes/ include some publicly accessible CSS and JavaScript files, it’s recommended to simply allow googlebot complete access to both directories at all times. Otherwise you’ll spend valuable time chasing structural and file-name changes in WordPress, trying to keep them synchronized with some elaborate set of robots rules. It’s just easier to allow open access to these directories. Thus the two directives above were removed permanently from robots.txt, and are not recommended in general.

Apparently Google is so hardcore about this new requirement [1] that they are actually penalizing sites (a LOT) for non-compliance [2]. Bad news for hundreds of thousands of site owners who have better things to do than keep up with Google’s constant, often arbitrary changes.

  • [1] Google demands complete access to all publicly accessible files.
  • [2] Note that it may be acceptable to disallow bot access to /wp-content/ and /wp-includes/ for other (non-Google) bots; a rough sketch follows below. Do your research though, before making any assumptions.
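
As a rough sketch of note [2], a separate group for a specific (hypothetical) bot might look like this; keep in mind that a bot matching a named User-agent group ignores the wildcard * group, so any other rules you want that bot to follow must be repeated inside its own group:

User-agent: ExampleBot
Disallow: /wp-content/
Disallow: /wp-includes/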

Previously on robots.txt..

As mentioned, my previous robots.txt file went unchanged for several years (which just vanished in the blink of an eye). The previous rules proved quite effective, especially with compliant spiders like googlebot. Unfortunately, they rely on pattern-matching syntax that only a few of the bigger search engines understand (and thus obey). Consider the following robots rules, which were used here at Perishable Press way back in the day.

Important! Please do not use the following rules on any live site. They are for reference and learning purposes only. For live sites, use the Better robots.txt rules provided in the previous section.

User-agent: *
Disallow: /mint/
Disallow: /labs/
Disallow: /*/wp-*
Disallow: /*/feed/*
Disallow: /*/*?s=*
Disallow: /*/*.js$
Disallow: /*/*.inc$
Disallow: /transfer/
Disallow: /*/cgi-bin/*
Disallow: /*/blackhole/*
Disallow: /*/trackback/*
Disallow: /*/xmlrpc.php
Allow: /*/20*/wp-*
Allow: /press/feed/$
Allow: /press/tag/feed/$
Allow: /*/wp-content/online/*
Sitemap: https://perishablepress.com/sitemap.xml

User-agent: ia_archiver
Disallow: /

Apparently, the wildcard character isn’t recognized by lesser bots, and I’m thinking that the end-pattern symbol (dollar sign $) is probably not well-supported either, although Google certainly gets it.

These patterns may be better supported in the future, but going forward there is no reason to include them. As seen in the “better robots” rules (above), the same pattern-matching is possible without using wildcards and dollar signs, enabling all compliant bots to understand your crawl preferences.
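
To make the compatibility issue concrete: a bot that does not support wildcards treats the pattern as a literal path prefix, so a rule like the first one below simply never matches anything on a typical site, while the plain prefix form is understood everywhere (though it only matches URLs that actually begin with /feed/):

Disallow: /*/feed/*
Disallow: /feed/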


About the Author
Jeff Starr = Designer. Developer. Producer. Writer. Editor. Etc.

106 responses to “Better Robots.txt Rules for WordPress”

  1. Rick Beckman 2011/02/10 12:57 pm

    WordPress has a hook to modify the robots.txt data programmatically, I think. Would be nice to have this as a plugin that could be updated as you improve the method. A more advanced plugin would allow for turning rules on & off as desired, adding custom rules, etc.

    • Jeff Starr 2011/02/10 3:02 pm

      That’s a great idea, I wish I had the time!

      Now that I think about it, I think there is already a plugin that does this to some degree, but if not somebody should definitely do it.

    • Peter Wilson 2011/02/10 3:04 pm

      Yep, it’s the do_robots hook.

      I add a few rules using a standard plugin I put in the must use directory (/wp-content/mu-plugins/). I might add a few more from Jeff’s post above once I’ve had the chance to consider it.

      It’s tuned for a WordPress Network but I’ve added it to pastebin anyway. http://wordpress.pastebin.com/j9W2JYTr

  2. Not so sure that blocking the feed is a great move. Google is generally pretty good at parsing feed content.

    • Jeff Starr 2011/02/10 2:49 pm

      It’s a close call, with duplicate content vs having your feed indexed. Unless feed content is different than the site content, blocking /feed/ is a good move because it preserves page rank and keeps the focus on the site.

      You’ll see in my previous robots file that I allowed the main feed to be indexed. These days however, I’m trying to keep duplicate content down to a minimum.

      • I really doubt that. Google crawls RSS feeds for Google Reader and it knows it’s RSS or ATOM. It even finds your RSS feeds and allows you to add them as sitemaps in Google Webmaster Tools.

        And that’s why there is rel="alternate" in the link to the feed (in head).

      • Jeff Starr 2011/02/11 8:02 am

        Perhaps, but eliminating duplicate content in the search index should take precedence over a bit of convenience in the Webmaster Tools area.

        Also, rel="alternate" is meaningless if your feed content is identical to your blog content, which is the case 99.99% of the time.

  3. Yeah, but it’s often duplicate content and you’d prefer someone land on the HTML article rather than an XML feed from a search query.

    There are other considerations, of course, but that’s typically why I disallow feeds.

  4. Jeff

    Now that’s a clean and mean robots.txt file!

    Glad you eventually buried the paranoia that must have gripped you back in the day ;)

    UR so a dude, dude.

  5. iceflatline 2011/02/15 5:00 pm

    Great post. Could you elaborate on why you use the Disallow: ?wptheme= directive? I’ve not seen that one before. Is it a directive specific to your particular theme?

    • Jeff Starr 2011/02/15 5:14 pm

      Yes, the ?wptheme= string is for the WP Theme Switch plugin. The goal of course is to keep duplicate versions of your site out of the search index.

  6. iceflatline 2011/02/16 6:17 am

    Jeff, thank you. May I ask one more question? What is the rationale for disallowing xmlrpc.php? My understanding is that it is an API primarily for remote publishing, so I am unclear on what a crawler would glean from it. Thanks in advance for your thoughts…

    • As far as I know, there is no reason the xmlrpc.php file needs to be crawled and indexed. The API is there for scripts and apps to work with directly. Disallowing robots access in no way affects the xmlrpc functionality.

  7. Yael K. Miller 2011/02/21 9:55 am

    Why disallow sitemap? Isn’t the whole point of the sitemap so spiders can crawl it?

    • Nope, you want spiders to crawl your canonical content: posts, pages, etc.

      Unless feed content is different than the site content, blocking /feed/ is a good move because it preserves page rank and keeps the focus on the site.

  8. Yael K. Miller 2011/02/21 10:36 am

    Thanks.

  9. Yael K. Miller 2011/02/21 10:42 am

    What’s /wp-content/online/ that you allow it?

  10. Yael K. Miller 2011/02/21 11:20 am

    If you disallow ia_archiver how is it that according to the SearchStatus Firefox addon, you still have an Alexa rating?

  11. Yael K. Miller 2011/02/21 11:58 am

    What should the file permission for robots.txt be?

  12. Hi Jeff, thanks for this great post. I have already modified my robots.txt file according to this post. Thanks a lot.

Comments are closed for this post. Something to add? Let me know.