Protect Against Humans.txt Query-String Scans
I woke up this morning to the sound of thousands of 404 requests hitting the server. It’s sad that there are kiddies out there who have nothing better to do than buy some pathetic $50 script and then sit there like an imbecile harassing people for hours on end. But alas, that is the world we live in. Fortunately, it’s trivial to block the entire scan with just a few lines of good old .htaccess.
About the scans
Before getting to the code, a bit more about the scans it protects against. Apparently some desperate lowlife released yet another script for scanning sites for vulnerabilities and exploit opportunities. So suddenly there are waves of scans probing for a vulnerability related to the following query-string parameter:
?whatever=http://www.google.com/humans.txt?
Unless this means something to your server, thousands of scans for it are simply wasting time, bandwidth, energy, money, and everything else. Responding with 404 is fine if you’ve optimized the response; otherwise these requests are basically leeches sucking the lifeblood from your business.
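If you do let them hit a 404, one easy way to keep that response lightweight is to serve a bare text message instead of a full themed error page. A minimal sketch (not part of the original fix, and optional if your 404 page is already lean):

# serve a plain-text 404 message instead of a heavy themed error page
ErrorDocument 404 "Not Found"

That way each bogus request costs a few bytes rather than a full page render.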
In a few months (or less) the script should be much less active, but right now there are too many idiots running it, which means that untold numbers of sites are being assaulted with relentless, malicious URL requests by the thousands. Apparently it’s not enough to run the script only once per site: they just keep it on auto-loop and repeat the same set of several hundred queries over and over again.
Here is a sampling of the requests made by the “humans.txt” scans:
http://example.com/admin.php?lang=http://www.google.com/humans.txt
http://example.com/zoomstats/libs/dbmax/mysql.php?GLOBALS[\'lib\'][\'db\'][\'path\']=http://www.google.com/humans.txt?
http://example.com/ytb/cuenta/cuerpo.php?base_archivo=http://www.google.com/humans.txt?
http://example.com/yabbse/Sources/Packages.php?sourcedir=http://www.google.com/humans.txt?
http://example.com/xt_counter.php?server_base_dir=http://www.google.com/humans.txt?
http://example.com/xarg_corner_top.php?xarg=http://www.google.com/humans.txt?
http://example.com/xoopsgallery/init_basic.php?GALLERY_BASEDIR=http://www.google.com/humans.txt?&2093085906=1&995617320=2
http://example.com/xarg_corner_bottom.php?xarg=http://www.google.com/humans.txt?
http://example.com/wsk/wsk.php?wsk=http://www.google.com/humans.txt?
http://example.com/xarg_corner.php?xarg=http://www.google.com/humans.txt?
http://example.com/wp-content/plugins/wp-table/js/wptable-button.phpp?wpPATH=http://www.google.com/humans.txt?
http://example.com/wp-content/plugins/wordtube/wordtube-button.php?wpPATH=http://www.google.com/humans.txt?
http://example.com/wp-content/plugins/mygallery/myfunctions/mygallerybrowser.php?myPath=http://www.google.com/humans.txt?
http://example.com/wp-cache-phase1.php?plugin=http://www.google.com/humans.txt?
http://example.com/worldpay_notify.php?mosConfig_absolute_path=http://www.google.com/humans.txt?
http://example.com/wordpress/wp-content/plugins/sniplets/modules/syntax_highlight.php?libpath=http://www.google.com/humans.txt?
…
This sort of scan is malicious for a number of reasons, but most annoying is the sheer volume of requests now hitting servers looking for a “humans.txt”-related response. It’s utterly pathetic how many times this scanning script is being executed with nothing more than a single potential payload (i.e., the one made possible via the http://www.google.com/humans.txt? vulnerability). What a waste.
Until use of this scan script slows down, you may want to take a moment and check whether your server logs show any signs of getting hit. You’ll know immediately, because thousands upon thousands of humans.txt requests are hard to miss.
I could sit here all day and discuss the matter, but my time is limited, and 50-dollar kiddies are simply not worth the effort. Stopping malicious nonsense, however, IS worth my time, so let’s look at a drop-dead simple way to stop the entire “humans.txt” scan cold.
Block the humans.txt scan with .htaccess
As mentioned, all of the requests for this particular scan target “humans.txt” via the query string, which makes my job super easy. Crack open your root .htaccess file and add the following snippet:
# block humans.txt scans
<IfModule mod_rewrite.c>
RewriteEngine On
# match the scan's signature URL anywhere in the query string
RewriteCond %{QUERY_STRING} http\:\/\/www\.google\.com\/humans\.txt\? [NC]
# deny the request with 403 Forbidden and stop further rewriting
RewriteRule .* - [F,L]
</IfModule>
I have this code installed on most of my sites now, and thankfully the pesky scans have stopped completely. Of course, they’ll be back with a new $50 script next month, but until then it’s nice to conserve server resources and keep error logs clear of nonsense.
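One small note: at least one of the sample requests above (the admin.php one) ends without the trailing “?” after humans.txt. If you want to catch those as well, a slightly looser variant of the same rule (just a sketch; identical idea, minus the trailing \?) would be:

# block humans.txt scans (looser variant: no trailing "?" required)
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{QUERY_STRING} http://www\.google\.com/humans\.txt [NC]
RewriteRule .* - [F,L]
</IfModule>

Either way, the F flag answers with 403 Forbidden and the L flag stops further rewriting, so the request never reaches anything else on your site.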
I can hear it already: “Wait, why are you blocking requests for the humans.txt file?” I’m not. This technique blocks requests that include http://www.google.com/humans.txt? in the query string. Consider:
# good request, never blocked.
http://example.com/humans.txt
# bad request, always blocked.
http://example.com/?some_path=http://www.google.com/humans.txt?
Hopefully that clears up any potential confusion regarding this simple yet effective solution.
Happy hunting, people.