Most of the time, when I catch scumbags attempting to spam, scrape, leech, or otherwise hack my site, I stitch up a new voodoo doll and let the cursing begin. No, seriously, I just blacklist the idiots. I don’t need their traffic, and so I don’t even blink while slamming the doors in their faces. Of course, this policy presents a bit of a dilemma when the culprit is one of the four major search engines. Slamming the door on […] Continue reading »
Figuratively speaking, hunting down and killing spammers, scrapers, and other online scum remains one of our favorite pursuits. Once we have determined that a particular IP address is worthy of banishment, we generally invoke the magical powers of htaccess to lock the gates. When htaccess is not available, we may summon the versatile functionality of PHP to get the job done. This method is straightforward: simply copy, edit, and paste the following code example into the top of any PHP […] Continue reading »
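The excerpt cuts off before the article's actual code, so here is only a rough sketch of the kind of snippet being described, placed at the very top of a PHP page; the banned address 192.0.2.1 is a placeholder, not anything from the original post:

<?php
// Sketch: deny access to a single banned IP address.
// 192.0.2.1 is a placeholder; replace it with the address to block.
$banned_ip = '192.0.2.1';

if (isset($_SERVER['REMOTE_ADDR']) && $_SERVER['REMOTE_ADDR'] === $banned_ip) {
	header('HTTP/1.1 403 Forbidden');
	exit('Access denied.');
}
?>

Because header() must run before any output is sent to the browser, the check belongs above everything else in the file; to ban more than one address, the single string can be swapped for an array and tested with in_array().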
Of all the bizarre, nonsensical, and pointless spam we have received so far this year, this one takes the cake. It was delivered to our designated spam account earlier this month as a plain-text email, which opens with an explanation. Apparently, “Bob Diamond” is “an Hiring Manager” looking to advertise a couple of important items. The first ad seems at least remotely realistic, but the second ad… it’s like, “teddy bear features” out of nowhere. You can’t be serious. Continue reading »
Web developers trying to control comment spam, bandwidth theft, and content scraping must choose between two fundamentally different approaches: selectively denying targeted offenders (the “blacklist” method) or selectively allowing desirable agents (the “whitelist”, or “opt-in”, method). Judging from various online forums and discussion boards, the blacklist method is currently the more popular of the two. It requires the webmaster to create and maintain a working list of undesirable agents, usually blocking their access via htaccess or PHP. The downside of blacklisting is that it requires considerable […] Continue reading »
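To make the contrast concrete, here is a rough PHP sketch of the blacklist approach described above; this is not the article's own code, and the agent strings are hypothetical placeholders:

<?php
// Blacklist sketch: deny requests from undesirable user agents.
// The agent names below are hypothetical examples only.
$blacklist = array('BadBot', 'EvilScraper', 'SpamHarvester');

$user_agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

foreach ($blacklist as $bad_agent) {
	if (stripos($user_agent, $bad_agent) !== false) {
		header('HTTP/1.1 403 Forbidden');
		exit('Access denied.');
	}
}
?>

The whitelist method simply inverts the logic: allow only agents that match an approved list and deny everything else, trading the endless chore of tracking new offenders for the risk of turning away legitimate visitors that were never added to the list.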
The method presented in our previous article on creating spamless email links via JavaScript, although relatively simple to implement, is not the most effective solution available. Spambots, email harvesters, and other online scumbags relentlessly advance their scanning technology, perpetually rendering yesterday’s methods obsolete. Continue reading »
In our never-ending battle against spammers, leeches, scrapers, and other online undesirables, we have implemented several powerful security measures to improve the operational integrity of our perpetual virtual existence. Here is a rundown of the new behind-the-scenes security features of Perishable Press. Continue reading »
What we have here is an excellent method for preventing a great deal of blog spam. With a few strategic lines placed in your .htaccess file, you can stop spambots from dropping spam bombs by denying access to all comment requests that do not originate from your own domain. Continue reading »
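The post implements this with .htaccess directives, which the excerpt does not include; purely as an illustration of the same referrer-checking idea, a rough PHP equivalent at the top of a comment-processing script might look like this, with example.com standing in as a placeholder domain:

<?php
// Illustrative sketch of the same idea in PHP: reject comment POSTs
// whose referrer is not this site. The domain below is a placeholder.
$allowed_domain = 'example.com';

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
	$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
	$host    = parse_url($referer, PHP_URL_HOST);

	if ($host !== $allowed_domain && $host !== 'www.' . $allowed_domain) {
		header('HTTP/1.1 403 Forbidden');
		exit('Comments must be submitted from ' . $allowed_domain . '.');
	}
}
?>

One known trade-off: some privacy tools and proxies strip the Referer header entirely, so a strict check like this can reject a small number of legitimate commenters along with the bots.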
If you have yet to encounter the content-scraping site bitacle.org, consider yourself lucky. The scum-sucking worm-holes at bitacle.org are well-known for literally, blatantly, and piggishly stealing blog content and using it for financial gain through advertising. While I am not here to discuss the legal, philosophical, or technical ramifications of bitacle’s illegal behavior, I am here to provide a few critical tools that will help stop bitacle from stealing your content. Continue reading »
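The tools the article goes on to present are not shown in this excerpt; as one hedged illustration of the general approach, a PHP check that turns away any request whose user agent or referrer mentions bitacle might be sketched like this:

<?php
// Illustrative sketch only: refuse requests whose user agent or
// referrer identifies them as coming from bitacle.
$user_agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
$referer    = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';

if (stripos($user_agent, 'bitacle') !== false || stripos($referer, 'bitacle') !== false) {
	header('HTTP/1.1 403 Forbidden');
	exit('Content scraping is not permitted.');
}
?>

A scraper that fakes a generic user agent and sends no referrer will slip past a check like this, which is presumably why the article provides a few different tools rather than a single rule.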
Recently, every website on our primary server was simultaneously attacked. The offending party indiscriminately replaced the contents of every index file, regardless of its extension or location, with a few vulgar lines of code, which indicated intention, identity, and influence. Apparently, the attack came by way of Germany, through a server at the University of Hamburg (uni-hamburg.de). This relatively minor attack resulted in several hours of valuable online education. In this article, we intend to share our experience with website attack […] Continue reading »
A list of HTTP error codes and their corresponding definitions:

Informational Codes
100 — Continue
101 — Switching Protocols

Successful Client Requests
200 — OK
201 — Created
202 — Accepted
203 — Non-Authoritative Information
204 — No Content
205 — Reset Content
206 — Partial Content

Client Request Redirected
300 — Multiple Choices
301 — Moved Permanently
302 — Moved Temporarily
303 — See Other
304 — Not Modified
305 — Use Proxy
307 — Temporary Redirect

Client Request Errors […] Continue reading »