Protect Your Site with a Blackhole for Bad Bots
One of my favorite security measures here at Perishable Press is the site’s virtual Blackhole trap for bad bots. The concept is simple: include a hidden link to a robots.txt-forbidden directory somewhere on your pages. Bots that ignore or disobey your robots rules will crawl the link and fall into the honeypot trap, which then performs a WHOIS lookup and records the event in the blackhole data file. Once added to the blacklist data file, bad bots are immediately denied access to your site.
- Live Demo
- How to Install
- Caveat Emptor
- Whitelist Good Bots
- License & Disclaimer
- Questions & Feedback
I call it the “one-strike” rule: bots have one chance to follow the robots.txt protocol, check the site’s robots.txt file, and obey its directives. Failure to comply results in immediate banishment. The best part is that the Blackhole only affects bad bots: normal users never see the hidden link, and good bots obey the robots rules in the first place. So the percentage of false positives is extremely low to non-existent. It’s an ideal way to protect your site against bad bots silently, efficiently, and effectively.
With a few easy steps, you can set up your own Blackhole to trap bad bots and protect your site from evil scripts, bandwidth thieves, content scrapers, spammers, and other malicious behavior.
The Blackhole is built with PHP, and uses a bit of .htaccess to protect the blackhole directory. Refined over the years and completely revamped for this tutorial, the Blackhole consists of a plug-&-play /blackhole/ directory that contains the following three files:
- .htaccess – protects the log file
- blackhole.dat – log file
- index.php – blackhole script
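For reference, the .htaccess file protects the log file with rules along these lines (a sketch only — the actual rules included in the download may differ):

# Sketch: deny direct web access to the log file (Apache 2.4 syntax).
# The actual rules shipped in the download may differ.
<Files blackhole.dat>
	Require all denied
</Files>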
These three files work together to create the Blackhole for Bad Bots. If you are running WordPress, the Blackhole plugin is recommended instead of this standalone PHP version.
The Blackhole is developed to make implementation as easy as possible. Here is an overview of the steps:
- Upload the /blackhole/ directory to your site
- Edit the four variables in the “EDIT HERE” section in index.php
- Ensure writable server permissions for the blackhole.dat file
- Add a single line to the top of your pages to include the index.php file
- Add a hidden link to the /blackhole/ directory in the footer
- Forbid crawling of /blackhole/ by adding a line to your robots.txt
So installation is straightforward, but there are many ways to customize functionality. For complete instructions, jump ahead to the installation steps. For now, I think a good way to understand how it works is to check out a demo.
Live Demo
I have set up a working demo of the Blackhole for this tutorial. It works exactly like the download version, but it’s set up as a sandbox, so when you trigger the trap, it blocks you only from the demo itself. Here’s how it works:
- First visit to the Blackhole demo loads the trap page, runs the whois lookup, and adds your IP address to the blacklist data file
- Once your IP is added to the blacklist, all future requests for the Blackhole demo will be denied access
So you get one chance (per IP address) to see how it works. Once you visit the demo, your IP address will be blocked from the demo only — you will still have full access to this tutorial (and everything else at Perishable Press). So with that in mind, here is the demo link (opens new tab):
Visit once to see the Blackhole trap, and then again to observe that you’ve been blocked. Again, even if you are blocked from the demo page, you will continue to have access to everything else here at Perishable Press.
How to Install
Here are complete instructions for implementing the standalone PHP version of Blackhole for Bad Bots. Note that these steps are written for Apache servers running PHP. The steps are the same for other PHP-enabled servers (e.g., Nginx, IIS), but you will need to replace the .htaccess file and rules with whatever works for your particular server environment. Note: for a concise summary of these steps, check out this tutorial.
Step 1: Download the Blackhole zip file, unzip, and upload to your site’s root directory. This location is not required, but it enables everything to work out of the box. To use a different location, edit the include path in Step 4.
Step 2: Edit the four variables in the “EDIT HERE” section in index.php.
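To give a rough idea, that section looks something like the sketch below. The variable names and values shown here are hypothetical placeholders — use the actual names that appear in your copy of index.php:

<?php
// Hypothetical placeholders — the actual variable names in your copy
// of index.php may differ. Edit the values in the "EDIT HERE" section.
$email   = 'you@example.com';       // where bad-bot alert emails are sent
$subject = 'Blackhole: bad bot!';   // subject line for alert emails
$message = 'You have been banned.'; // message displayed to blocked bots
$logfile = 'blackhole.dat';         // path to the blacklist data file
?>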
Step 3: Change file permissions for blackhole.dat to make it writable by the server. The required permission settings may vary depending on server configuration. If you are unsure about this, ask your host. Note that the blackhole script needs to be able to read, write, and execute the blackhole.dat file.
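If you want to confirm the permissions from PHP, here is a quick diagnostic sketch (it assumes the default install location under the document root):

<?php
// Quick diagnostic: confirm that the server can write to the log file.
// Assumes the default /blackhole/ install location in the document root.
$log = realpath(getenv('DOCUMENT_ROOT')) . '/blackhole/blackhole.dat';
echo is_writable($log) ? 'blackhole.dat is writable.' : 'blackhole.dat is NOT writable.';
?>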
Step 4: Include the Blackhole script by adding the following line to the top of your pages (e.g., in your site’s header template):
<?php include(realpath(getenv('DOCUMENT_ROOT')) . '/blackhole/index.php'); ?>
The Blackhole script checks the bot’s IP address against the blacklist data file. If a match is found, the request is blocked with a customizable message. View the source code for more information.
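Conceptually, the check works something like this simplified sketch (the actual index.php does more, such as whitelisting, WHOIS lookups, and logging of new offenders; the log-file location here is an assumption):

<?php
// Simplified sketch of the blacklist check — the actual index.php also
// handles whitelisting, WHOIS lookups, and logging of new offenders.
$ip   = $_SERVER['REMOTE_ADDR'];
$file = __DIR__ . '/blackhole.dat'; // hypothetical location of the log
$log  = file_exists($file) ? file_get_contents($file) : '';
if (strpos($log, $ip) !== false) {
	http_response_code(403); // blacklisted: deny the request
	exit('You have been banned from this site.');
}
?>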
Step 5: Add a hidden link to the /blackhole/ directory in the footer of your site’s web pages (replace “Your Site Name” with the name of your site):
<a rel="nofollow" style="display:none" href="https://example.com/blackhole/" title="Do NOT follow this link or you will be banned from the site!">Your Site Name</a>
This is the hidden trigger link that bad bots will follow. It’s hidden with CSS, so 99.999% of visitors won’t ever see it. Alternatively, to hide the link from users without relying on CSS, replace the anchor text with a transparent 1-pixel GIF image. For example:
<a rel="nofollow" style="display:none" href="http://example.com/blackhole/" title="Do NOT follow this link or you will be banned from the site!"><img src="/images/1px.gif" alt=""></a>
Remember to edit the link href value and the image src to match the correct locations on your server.
Step 6: Finally, add a Disallow directive to your site’s robots.txt file:

User-agent: *
Disallow: /blackhole/
This step is pretty important. Without the proper robots directives, all bots would fall into the Blackhole because they wouldn’t know any better. If a bot wants to crawl your site, it must obey the rules! The robots rule that we are using basically says, “All bots DO NOT visit the /blackhole/ directory or anything inside of it.” So it is important to get your robots rules correct.
Step 7: Done! Remember to test thoroughly before going live. Also check out the section on customizing for more ideas.
You can verify that the script is working by visiting the hidden trigger link (added in Step 5). That should take you to the Blackhole warning page on your first visit, and then block you from further access on subsequent visits. To verify that you’ve been blocked entirely, try visiting any other page on your site. To restore site access at any time, you can clear the contents of the blackhole.dat log file.
Important: Make sure that all of the rules in your robots.txt file are correct and have proper syntax. For example, you can use the free robots.txt validator in Google Webmaster Tools (requires Google account).
The previous steps will get the Blackhole set up with the default configuration, but there are some details that you may want to customize:
- index.php (lines 25–28): Edit the four variables as needed
- index.php (lines 140–164): Customize the markup of the warning page
- index.php (line 180): Customize the list of whitelisted bots
These are the recommended changes, but the PHP is clean and generates valid HTML, so feel free to modify the markup or anything else as needed.
If you get an error letting you know that a file cannot be found, it could be an issue with how the script specifies the absolute path, using getenv('DOCUMENT_ROOT'). That function works on the majority of servers, but if it fails on your server for whatever reason, you can simply replace it with the actual path. From Step 4, the include script looks like this:
<?php include(realpath(getenv('DOCUMENT_ROOT')) . '/blackhole/index.php'); ?>
So if you are getting not-found or similar errors, try this instead:
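For example, using a hypothetical absolute path (replace it with the actual document root on your server):

<?php include('/var/www/example.com/public_html/blackhole/index.php'); ?>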
That is, use the actual absolute path to the blackhole index.php file on your server. As long as you get the path correct, that will fix any “file can’t be found” type errors you may be experiencing.
If in doubt about the actual full absolute path, consult your web host or use a PHP function or constant such as __DIR__ to obtain the correct information. And check out my tutorial over at WP-Mix for more information about including files with PHP and WordPress.
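For example, assuming the file doing the including lives in your site’s root directory (adjust the relative path otherwise), __DIR__ resolves the absolute path for you:

<?php include(__DIR__ . '/blackhole/index.php'); ?>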
Caveat Emptor
Blocking bots is serious business. Good bots obey robots.txt rules, but there may be potentially useful bots that do not. Yahoo is the perfect example: it’s a valid search engine that sends some traffic, but sadly the Yahoo Slurp bot is too stupid to follow the rules. Since setting up the Blackhole several years ago, I’ve seen Slurp disobey robots rules hundreds of times.
By default, the Blackhole DOES NOT BLOCK any of the big search engines. So Google, Bing, and company always will be allowed access to your site, even if they disobey your robots.txt rules. See the next section for more details.
Whitelist Good Bots
In order to ensure that all of the major search engines always have access to your site, Blackhole whitelists the big search engine bots, including Google, Bing/MSN, Yahoo (Slurp), DuckDuckGo, Baidu, and Yandex. Additionally, popular social media services are whitelisted, as well as some other known “good” bots. To whitelist these bots, the Blackhole script uses regular expressions to ensure that all possible name variations are allowed access. For each request made to your site, Blackhole checks the User Agent and always allows anything that contains any of the following strings:
a6-indexer, adsbot-google, ahrefsbot, aolbuild, apis-google, baidu, bingbot, bingpreview, butterfly, cloudflare, duckduckgo, embedly, facebookexternalhit, facebot, googlebot, ia_archiver, linkedinbot, mediapartners-google, msnbot, netcraftsurvey, outbrain, pinterest, quora, rogerbot, showyoubot, slackbot, slurp, sogou, teoma, tweetmemebot, twitterbot, uptimerobot, urlresolver, vkshare, w3c_validator, wordpress, wp rocket, yandex
So any bot that reports a user agent containing any of these strings will NOT be blocked and always will have full access to your site. To customize the list of whitelisted bots, open index.php and locate the function blackhole_whitelist(), where you will find the list of allowed bots.
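As a rough sketch of the idea (not the script’s exact code — the function name and pattern list here are illustrative, and the real list is much longer):

<?php
// Illustrative sketch only — the actual blackhole_whitelist() function
// in index.php uses its own (longer) pattern list.
function blackhole_whitelist_sketch() {
	$pattern    = '/(googlebot|bingbot|slurp|duckduckgo|yandex|baidu)/i';
	$user_agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
	// Case-insensitive substring match: any user agent containing
	// a whitelisted string is always allowed access.
	return (bool) preg_match($pattern, $user_agent);
}
?>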
The upside of whitelisting these user agents is that anything claiming to be a major search engine is allowed open access. The downside is that user-agent strings are easily spoofed, so a bad bot could crawl along and say, “Hey look, I’m teh Googlebot!” and the whitelist would grant access. It is your decision where to draw the line.
With PHP, it is possible to verify the true identity of each bot, but doing so consumes significant resources and could overload the server. To avoid that scenario, the Blackhole errs on the side of caution: it’s better to allow a few spoofs than to block any of the major search engines and other major web services.
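For reference, a minimal sketch of that kind of verification (forward-confirmed reverse DNS, the method Google recommends); this is not part of the Blackhole script:

<?php
// Sketch: verify a claimed Googlebot via forward-confirmed reverse DNS.
// Not part of the Blackhole script — DNS lookups add latency on every
// request, which is exactly the overhead described above.
function is_verified_googlebot($ip) {
	$host = gethostbyaddr($ip); // reverse DNS lookup
	if (!$host || !preg_match('/\.(googlebot|google)\.com$/i', $host)) {
		return false; // hostname does not belong to Google
	}
	// Forward-confirm: the hostname must resolve back to the same IP
	return gethostbyname($host) === $ip;
}
?>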
License & Disclaimer
Questions & Feedback
Questions? Comments? Send ’em via my contact form. Thanks!
Here you can download the latest version of Blackhole for Bad Bots. By downloading, you agree to the terms.
I came across this topic years ago, right here:
# a lot of bots include the googlebot string in their user agent – that’s why people think that your script blocks the Googlebot. Usually it doesn’t need to be whitelisted.
# checking only the user agent is bad in this case. Google explains how to verify their bot correctly over there: http://googlewebmastercentral.blogspot.com/2006/09/how-to-verify-googlebot.html
# the best way to install this script is to add it to the robots.txt, then wait a day before adding the hidden link somewhere on the page.
Laserpointa from Protecus Forum
This is awesome! I’m experimenting around with it, turning it into an automated WP plugin, and I’ve managed to ban myself about 5 times from my site! lol
Doesn’t seem to be working for me. After installation I’m getting no error, but in attempting to get myself banned by repeatedly visiting the banned page, nothing happens, and the dat file never gets written to. I’ve changed permissions to allow full access to everything (just to test), and still nothing gets written.
I receive the “Bad Bot” email, the “bad bot” page displays fine, and there are no errors reporting in my error log.
Figured it out.. nm.
Now, can you make a black hole for all the spam posters to my blog? The ones that post “…really love the way you write and your content is so good, never thought along these lines before but you enlightened me…” on my art blog with only pictures of my art, or on my about page.
I would love to develop a list and block them from ever even trying to post.
is there any “dynamic” way to do this, instead of manually listing and typing IPs into the file?
Is it OK that the .dat file has some IPs inside? And what about the concerns over prefetching and search engines’ antispam features?
Found your site via StumbleUpon. Very nice and informative. Keep up the good work. It has really touched me.
Maybe take a look at http://www.spider-trap.de/en_index.html
It’s pretty amusing to read all the discussion about how to avoid banning GoogleBot. If they’re crawling a page that you’ve explicitly requested they ignore, then you should treat them like every other crawler. I don’t see the point of breaking your own rule in this case.
Eric, that’s the 2,000-pound gorilla in the room. Why bother whitelisting any search engine? If they break the rules, ban them. Right? Unfortunately, Google owns the Web, so they pretty much decide what it is exactly that they will and won’t do. Sucks, but true.
If you read the URL Tom has posted, you will notice that Google simply wants to be called explicitly and not via the wildcard. That way it obeys the robots.txt, according to the author of that site. :)