Protect Against Malicious POST Requests

[ Protect yourself ] Whether you like it or not, there are scripts and bots out there hammering away at your sites with endless HTTP “POST” requests. POST requests are sort of the opposite of GET requests. Instead of getting some resource or file from the server, data is being posted or sent to it.

To illustrate, normal surfing around the Web involves your browser making a series of GET requests for all the resources required for each web page: HTML, JavaScript, CSS, images, and so on. But whenever you leave a comment, tweet something, or share on Facebook, the browser sends your content, along with other data, to the server as a POST request. Such activity is a perfectly normal and expected part of how the Web works.

The Problem

The problem is that, on a typical server, there are no restrictions on POST requests. So perpetrators can run scripts that make endless POST requests to unsuspecting sites 24 hours a day, 7 days a week, 365 days a year. For example, if you have a contact form on your site, chances are good that it’s being bombarded with copious volumes of nasty POST spam. Likewise for other forms or scripts that utilize Ajax — even static HTML sites with absolutely zero POST-handling are repeatedly probed with malevolent POST requests.

As we’ll see a bit later in the article, illicit POST requests typically include large amounts of data. To prevent detection and filtering, POST spammers like to convert, encode, and obfuscate their code, which can dramatically increase overall request size. Over time, the cumulative effect of massive POST requests to your site may be experienced as a slow sucking sound, as server resources and bandwidth are gobbled up by relentless spam scripts and other malicious behavior.

Even worse is the perpetual threat of some snot-nosed loser finding a vulnerability and exploiting your site. Nobody has time for that. So in this article, we’ll learn more about POST requests, how to monitor them, and most importantly how to protect your online assets against malicious POST activity.

Server Response

Each server has its own specific configuration, but in general, POST requests are handled either successfully or not. For example, if you have a contact form and someone is making POST requests, the requests will go through successfully if they meet the requirements specified in the contact script. Otherwise, POST requests that fail validation or otherwise don’t meet the form requirements will either die quietly or receive a 404 error in response.

The persistent threat with POST requests, however, is that they may reveal a vulnerability somewhere on the server. This is the primary reason why spammers and other nefarious types spend time and resources endlessly posting their malicious payloads via POST requests: they’re looking for opportunities to exploit, plunder, and steal your online assets. Plain and simple.

For example, if your contact form, say, is not properly secured, it could be possible for spammers to use the form to send out mass emails without your knowledge. Or if some POST request reveals a vulnerability and some criminal gains access, chances are that you wouldn’t realize until it’s too late. As we’ll see next, detecting such activity in your server’s access logs may prove difficult, as the actual POST content is typically not included in the recorded log data.

Detecting POST Requests

Detecting POST requests in server logs requires a simple search for the word “POST” (this may seem obvious, but you may be surprised). So looking through your latest access logs on an Apache server, the logged HTTP requests will look similar to these:

 - [27/Jul/2013:08:43:50 -0700] "POST / HTTP/1.0" 404 14566 "" "Mozilla/5.0 (Windows NT 6.0; rv:16.0) Gecko/20100101 Firefox/16.0"
 - [27/Jul/2013:08:43:51 -0700] "POST / HTTP/1.0" 404 14566 "" "Mozilla/5.0 (Windows NT 6.0; rv:16.0) Gecko/20100101 Firefox/16.0"
 - [27/Jul/2013:08:44:39 -0700] "POST / HTTP/1.0" 404 14566 "" "Opera/9.80 (Windows NT 6.1; WOW64; MRA 6.0 (build 5976)) Presto/2.12.388 Version/12.11"

For each logged request*, we have the visitor’s IP, date/time, the request, the response, byte size, and the user agent. While all of the information is useful for various analyses, the focus here is the request, which itself consists of the following details:

  • Type of request (e.g., GET, POST, HEAD, etc.)
  • The request URI (e.g., /some-page/ or /)
  • HTTP Protocol (e.g., HTTP/1.0 or HTTP/1.1)

These data make it easy to identify POST requests, but analysis is difficult without knowing the actual contents of the POST request. This explains why, if you are monitoring 404 errors, say, you may occasionally get reports of 404 errors for pages and resources that DO exist. For example, if someone/something makes a POST request to a URL that is not equipped to handle it, the server may return a 404 “Not Found” response. So if you’re getting 404 errors for pages and resources that actually exist on the server, it could be that POST requests are the undetected culprit.

*Note that log files may vary depending on server configuration. You can learn more about deciphering server access logs in the Apache Docs.
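If you prefer to dig through logs from the command line, a quick awk one-liner pulls out just the request portion of every POST line. Here is a minimal sketch; the sample line is taken from the log format shown above, and the log path in the comment is an assumption about a typical Apache setup:

```shell
# A sample logged request in the format shown above
line='- [27/Jul/2013:08:43:50 -0700] "POST / HTTP/1.0" 404 14566 "" "Mozilla/5.0"'

# Print just the request portion (method, URI, protocol) of any POST line
printf '%s\n' "$line" | awk -F'"' '/"POST /{print $2}'

# Against a live log, e.g.: awk -F'"' '/"POST /{print $2}' /var/log/apache2/access.log
```

Splitting on the double-quote character makes the second field exactly the request string, regardless of what the user agent or referrer fields contain.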

Monitoring POST Requests

The good news is that, once you know what to look for, it’s just a matter of writing the code to access the information. In this case, we can monitor POST requests in much the same way that we monitor 404 requests. Consider the following PHP code, which is typically included in 404-monitoring scripts:

$http_host    = $_SERVER['HTTP_HOST'];
$server_name  = $_SERVER['SERVER_NAME'];
$remote_ip    = $_SERVER['REMOTE_ADDR'];
$remote_host  = $_SERVER['REMOTE_HOST'];
$request_uri  = $_SERVER['REQUEST_URI'];
$http_ref     = $_SERVER['HTTP_REFERER'];
$query_string = $_SERVER['QUERY_STRING'];
$user_agent   = $_SERVER['HTTP_USER_AGENT'];

That will provide a wealth of data regarding the request, but many such scripts that I’ve seen are missing a key piece of information, the actual POST content:

// clean() is the script's user-defined sanitizing function
$post_vars = clean(file_get_contents('php://input'));

Including that line in your error-monitoring script gives you ringside access to all of the data that’s being posted to your site. Once implemented, be prepared to experience first-hand the landslide of POST-request garbage.


It’s not pretty, but we’re now equipped with the information needed to effectively protect against such malicious POST requests. For more information about configuring and monitoring Apache access and error logs, check out my book .htaccess made easy.

POST Control & Protection

Now that we’ve seen what POST requests are, why they can be dangerous, and how to monitor them, it’s time to get serious about security and lock down POST requests according to our specific needs.

The key to forging your own strategy for handling POST requests is understanding which requests are valid or accepted on your site. Once you have that information, it’s straightforward to put together a set of rules to block invalid requests while ensuring that valid POST data is allowed normal access. With this in mind, let’s look at some .htaccess techniques for protecting against unwanted POST data.

Note: these techniques apply to Apache servers and may be added via root (or other) .htaccess file, or added via the config file, httpd.conf. Please make backups of any relevant files, and test thoroughly before going live.

Deny all POST requests

For sites that do not accept them, denying all POST requests is an ideal solution. For example, if your site is entirely static HTML with no forms or submitted data of any kind (for example, a one-page portfolio site), protecting against rogue POST requests is as simple as adding this to the root .htaccess file:

# deny all POST requests
<IfModule mod_rewrite.c>
	RewriteCond %{REQUEST_METHOD} POST
	RewriteRule .* - [F,L]
</IfModule>

This uses Apache’s rewrite module to detect and deny any/all POST requests. Plain and simple. No edits required, but you may want to redirect blocked requests to a specific page or file; to do so, replace the rewrite rule with this:

RewriteRule .* /custom.php [R=301,L]

Then change the redirect location, /custom.php, to whatever is required.

Deny POST requests using HTTP 1.0

I’ve seen a lot of ill POST requests coming in as HTTP 1.0, rather than HTTP 1.1. This isn’t a super-big deal, but there is more room for mischief with HTTP 1.0, primarily because the protocol does not require a Host header. Thus, another strategy for dealing with unwanted POST requests (and other types of requests, for that matter) is to require the HTTP 1.1 protocol.

# require HTTP 1.1 for POST
<IfModule mod_rewrite.c>
	RewriteCond %{THE_REQUEST} ^POST(.*)HTTP/(0\.9|1\.0)$ [NC]
	RewriteRule .* - [F,L]
</IfModule>

When added to .htaccess, this instructs the server to respond with 403 Forbidden to any POST request that isn’t sent via HTTP 1.1.
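Before (or after) adding this rule, you can gauge how much of your POST traffic actually arrives via the older protocol with a quick log search. A minimal sketch; the sample lines mirror the log format shown earlier, and the live log path is an assumption:

```shell
# Count POST requests made via HTTP/1.0 (sample data piped in for illustration)
printf '%s\n' \
	'- [27/Jul/2013:08:43:50 -0700] "POST / HTTP/1.0" 404 14566 "" "UA-1"' \
	'- [27/Jul/2013:08:44:39 -0700] "GET / HTTP/1.1" 200 5120 "" "UA-2"' |
	grep -c '"POST .* HTTP/1\.0"'

# Against a live log: grep -c '"POST .* HTTP/1\.0"' /var/log/apache2/access.log
```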

Whitelist POST requests

The strategy that I prefer is to whitelist POST requests for certain resources. For example, on one site I have a contact form handled by a PHP file, contact.php. Here is the .htaccess to do the job:

# whitelist POST requests
<IfModule mod_rewrite.c>
	RewriteCond %{REQUEST_METHOD} POST
	RewriteCond %{REQUEST_URI} !/contact\.php [NC]
	RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.1$
	RewriteRule .* - [F,L]
</IfModule>

Notice that the localhost IP ( is also whitelisted, as the contact form also communicates with the server via Ajax requests. Additional resources may be whitelisted using the following pattern:

RewriteCond %{REQUEST_URI} !/contact\.php [NC]
RewriteCond %{REQUEST_URI} !/business\.php [NC]
RewriteCond %{REQUEST_URI} !/pleasure\.php [NC]

Pro Tip: Because we are negating each of these files, the [NC] flag is declared for each line. If these were positive matches (i.e., without the exclamation point !), we would use the [NC,OR] flag for each line EXCEPT the last line, like so:

RewriteCond %{REQUEST_URI} /contact\.php [NC,OR]
RewriteCond %{REQUEST_URI} /business\.php [NC,OR]
RewriteCond %{REQUEST_URI} /pleasure\.php [NC]

Like that old TV ad says: “the more you know”…

Control POST via referrer

Another effective strategy, depending on your setup, is to limit POST requests based on referrer. It’s not 100% foolproof, as referrer information can be spoofed, but I’ve seen it used effectively to cut down on illicit POST requests.

# allow POST based on referrer
<IfModule mod_rewrite.c>
	RewriteCond %{REQUEST_METHOD} POST
	RewriteCond %{REQUEST_URI} /contact\.php
	RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com [NC]
	# RewriteCond %{HTTP_USER_AGENT} !^$
	RewriteRule .* - [F,L]
</IfModule>

In this code, replace /contact\.php with your own file(s), and replace the example domain with your own domain name. Once in place, this technique will deny any POST request that isn’t sent from your own domain.

Note that a commented-out exception is included; uncommenting it allows requests with blank/empty user agents through the filter. This may be required depending on the functionality of the protected resource.

Advanced POST control

In the previous techniques, we use a variety of server variables to control access for POST requests: THE_REQUEST, REQUEST_URI, REMOTE_ADDR, and HTTP_REFERER. In addition to these, we may use any of the variables available to mod_rewrite. For example, if the server is getting hammered by obnoxious POST requests from a particular remote host, we can deny that host directly:

# deny POST based on host
<IfModule mod_rewrite.c>
	RewriteCond %{REQUEST_METHOD} POST
	# matching REMOTE_HOST requires HostnameLookups; alternatively, match the IP via REMOTE_ADDR
	RewriteCond %{REMOTE_HOST} (dfw\.dsl-w\.verizon\.net) [NC]
	RewriteRule .* - [F,L]
</IfModule>

To learn more about controlling traffic based on server variables, check out Eight Ways to Blacklist with Apache’s mod_rewrite.

Testing POST requests

A key part of implementing .htaccess techniques is testing thoroughly. As discussed, it’s possible to record the content of POST requests using the following line of PHP:

$post_vars = clean(file_get_contents('php://input'));

Let’s focus a bit more and walk through an example of this technique. Consider the following plan:

  1. Set up a simple email alert for all POST requests
  2. After detecting foul play, use .htaccess to deny access
  3. Use available tools to thoroughly test the new .htaccess directives

These basic steps may be used to develop a personalized strategy for protecting against malicious POST requests. With simplicity in mind, here’s one way to set it up.

Step 1: PHP

Create a new PHP file named error-handler.php (or whatever). Inside, include this code:

// add your email address below; mm_strip() is the script's sanitizing function
$email = '';
$subject = 'Testing POST requests';
$remote_ip = $_SERVER['REMOTE_ADDR'];
$remote_host = $_SERVER['REMOTE_HOST'];
$user_agent = $_SERVER['HTTP_USER_AGENT'];
$method = mm_strip($_SERVER['REQUEST_METHOD']);
$protocol = mm_strip($_SERVER['SERVER_PROTOCOL']);
$post_vars = mm_strip(file_get_contents('php://input'));
$message = 'IP: ' . $remote_ip . "\n";
$message .= 'HOST: ' . $remote_host . "\n";
$message .= 'User Agent: ' . $user_agent . "\n";
$message .= 'Method: ' . $method . "\n";
$message .= 'Protocol: ' . $protocol . "\n";
$message .= 'POST Vars: ' . $post_vars . "\n";
if ($method == 'POST') mail($email, $subject, $message);

This is a much-simplified version of the 404 script posted at WP-Mix, so feel encouraged to customize as desired.

Note that currently the script reports only POST requests; to monitor all 403 and 404 requests, replace the last line with this:

mail($email, $subject, $message);

Step 2: .htaccess

With our PHP error-handler in place, add the following directives to the root .htaccess file:

ErrorDocument 403 /error-handler.php
ErrorDocument 404 /error-handler.php

These lines instruct Apache to serve the handler script for all 403 and 404 errors. It’s not meant to be a permanent modification, rather just long enough to see what’s happening with POST requests on your domain.

Note that any long-term error monitoring script should also send the proper headers given the type of response. For now, we’re gonna keep it simple and continue with the tutorial.

Step 3: monitoring

With our basic monitoring system in place, it’s time to kick back and wait for something to bite. Depending on how much traffic flows through your site, collecting data may require more or less time.

For this example, let’s say that we see a disconcerting number of malicious POST requests reporting the following info:

User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.92 Safari/537.4
Method: POST
Protocol: HTTP/1.0
POST Vars: name=AdhedAtmord&…&contact_submit=Send

With this sort of information, we’re well-equipped to crack down on bad POST requests using the previously covered techniques.

Step 4: controlling access

Armed with details of the types of POST requests we would like to block, we notice that most of the requests are reporting, say, “Super_Spam_Sucker” as a common user agent.

Such a scenario is optimal because it enables the widest possible protection with the least possible amount of code. Sure, we could also (or instead) deny access based on host, IP, and so forth, but in this example it is most effective to simply deny access to the offending user agent.
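One quick way to confirm which user agent dominates the spam is to tally the final quoted field of each logged request. Here is a minimal sketch; the sample lines and the “Super_Spam_Sucker” agent are illustrative, and the live log path in the comment is an assumption:

```shell
# Tally user agents (the last quoted field of each combined-format log line)
printf '%s\n' \
	'- [27/Jul/2013:08:43:50 -0700] "POST / HTTP/1.0" 403 210 "" "Super_Spam_Sucker"' \
	'- [27/Jul/2013:08:43:51 -0700] "POST / HTTP/1.0" 403 210 "" "Super_Spam_Sucker"' \
	'- [27/Jul/2013:08:44:39 -0700] "POST / HTTP/1.0" 403 210 "" "Mozilla/5.0"' |
	awk -F'"' '{print $(NF-1)}' | sort | uniq -c | sort -rn

# Against a live log: awk -F'"' '{print $(NF-1)}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head
```

The most frequent offender floats to the top of the output, ready to be matched in an .htaccess rule.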

Before adding any .htaccess directives, however, it’s important to do our due diligence and investigate any suspected parties. After a quick search online, it becomes clear that “Super_Spam_Sucker” is indeed a known/reported perpetrator, so with a clear conscience we add the following lines to .htaccess:

# deny POST from Super_Spam_Sucker
<IfModule mod_rewrite.c>
	RewriteCond %{REQUEST_METHOD} POST
	RewriteCond %{HTTP_USER_AGENT} Super_Spam_Sucker [NC]
	RewriteRule .* - [F,L]
</IfModule>

Step 5: testing

Up until a few years ago, there weren’t any decent online tools for testing different types of HTTP requests. Now we have the luxury of online services that enable testing of variously configured HTTP requests, including options for HTTP headers, request type, user agent, and more. Visiting such a service, we fill in the details of our test POST request:

[ Test POST request ]

As shown here, we’ve specified that the request is a POST request, and that the user agent is “Super_Spam_Sucker”. Looks good, so we hit the submit button and perform the test:

[ Test POST response ]

“403 Forbidden” — perfect, exactly what we want. If instead a 200 OK response is returned, something isn’t configured properly, or there could be something else interfering. Generally the process is straightforward, but it may take some time to dial it in.
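You can also run the same test from your own terminal. As a rough sketch, here’s how to assemble the raw HTTP/1.0 POST request by hand; example.com, /contact.php, and the body are placeholders, and the nc/curl commands in the comments show how it could actually be sent:

```shell
# Assemble a raw POST request matching the test scenario
body='name=test&contact_submit=Send'
request=$(printf 'POST /contact.php HTTP/1.0\r\nHost: example.com\r\nUser-Agent: Super_Spam_Sucker\r\nContent-Type: application/x-www-form-urlencoded\r\nContent-Length: %s\r\n\r\n%s' "${#body}" "$body")

# Inspect the request before sending
printf '%s\n' "$request"

# To send it for real: printf '%s' "$request" | nc example.com 80
# Or more simply:      curl -A 'Super_Spam_Sucker' -d "$body" http://example.com/contact.php
```

Either way, a correctly configured block should answer with 403 Forbidden.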

For further insight into POST requests, check out Testing HTTP Requests here at Perishable Press.

Going Further

To really lock things down, you can deny access to other unused/unnecessary types of requests by adding something similar to the following:

# deny unused request types
<IfModule mod_rewrite.c>
	RewriteCond %{REQUEST_METHOD} ^(delete|head|trace|track)$ [NC]
	RewriteRule .* - [F,L]
</IfModule>

And even with all that we’ve covered in this article, we’re only scratching the surface of the configuration, customization, and protection made possible by Apache directives, whether applied via the configuration file or via .htaccess. To really dig in, check out my book, .htaccess made easy, for more information about controlling access and securing your site.
