
Better Robots.txt Rules for WordPress

[ Better Robots.txt Rules for WP ]

Cleaning up my files during the recent redesign, I realized that several years had somehow passed since the last time I even looked at the site’s robots.txt file. I guess that’s a good thing, but with all of the changes to site structure and content, it was time again for a delightful romp through robots.txt. This post summarizes my research and gives you a near-perfect robots file, so you can copy/paste it completely “as-is”, or use it as a template to give you […] Continue reading »
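For flavor, here is a minimal sketch of the kind of rules such a file might contain (the paths and sitemap URL are illustrative, not the post's exact ruleset):

    # robots.txt — illustrative sketch only
    User-agent: *
    Disallow: /wp-admin/
    Allow: /wp-admin/admin-ajax.php
    Sitemap: https://example.com/sitemap.xml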

Online Tools for DNS, CHMOD, Data URLs

I’ve always liked the idea of having an “Asides” category for links, notes, and other miscellaneous debris. So here is the first aside post, classified in the newly established asides tag (woo-hoo!). Here are three awesome tools to optimize your workflow. Continue reading »

Protect Your Site with a Blackhole for Bad Bots

[ Black Hole (Vector) ]

One of my favorite security measures here at Perishable Press is the site’s virtual Blackhole trap for bad bots. The concept is simple: include a hidden link to a robots.txt-forbidden directory somewhere on your pages. Bots that ignore or disobey your robots rules will crawl the link and fall into the honeypot trap, which then performs a WHOIS Lookup and records the event in the blackhole data file. Once added to the blacklist data file, bad bots are immediately denied […] Continue reading »
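A minimal sketch of the trap's two ingredients, assuming a hypothetical trap directory named /blackhole/ (the actual markup may differ):

    # robots.txt — compliant bots are told to stay out
    User-agent: *
    Disallow: /blackhole/

    <!-- hidden link on the page; only rule-breaking bots follow it -->
    <a href="/blackhole/" rel="nofollow" style="display:none">Do not follow this link</a>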

Stop 404s for Mobile Versions of Your Site

[ Stop 404 Requests for Mobile Sites ]

If you’ve been keeping an eye on your 404 errors recently, you will have noticed an increase in requests for nonexistent mobile files and directories, especially over the past year or so. The scripts and bots requesting these files from your server seem to be looking for a mobile version of your site. Unfortunately, they are wasting bandwidth and resources in the process. It has become common to see the following 404 errors constantly repeated in your log files: http://domain.tld/apple-touch-icon.png […] Continue reading »
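One common way to quiet these requests is to answer them with a file that actually exists. A hedged .htaccess sketch, assuming Apache with mod_alias and a real /favicon.png on your server:

    # Map apple-touch-icon variants to one existing icon (adjust the target path)
    RedirectMatch 301 ^/apple-touch-icon(-\d+x\d+)?(-precomposed)?\.png$ /favicon.png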

The New Clearfix Method

Say goodbye to the age-old clearfix hack and hello to the new and improved clearfix method. The clearfix hack, or “easy-clearing” hack, is a useful method of clearing floats. I have written previously about the original clearfix method and even suggested a few improvements. The original clearfix hack works great, but the browsers that it targets are either obsolete or well on their way. Specifically, Internet Explorer 5 for Mac is now history, so there is no reason to bother […] Continue reading »
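For reference, a minimal sketch of the pseudo-element approach the clearfix family is built on (a later simplified variant, not necessarily the post's exact rules):

    /* clear floats with a generated pseudo-element; no extra markup needed */
    .clearfix:after {
        content: "";
        display: table;
        clear: both;
    }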

Pimp Your 404: Presentation and Functionality

[ Screenshot: Default Apache 404 Error Page ]

I have been wanting to write about 404 error pages for quite a while now. They have always been very important to me, with customized error pages playing an integral part in every well-rounded web-design strategy. Rather than try to re-invent the wheel with this, I think I will just go through and discuss some thoughts about 404 error pages, share some useful code snippets, and highlight some suggested resources along the way. In a sense, this post is nothing more […] Continue reading »
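As one concrete snippet of the kind discussed, Apache's ErrorDocument directive points 404s at a custom page (the path here is illustrative):

    # .htaccess — serve a custom page for "Not Found" errors
    ErrorDocument 404 /errors/404.php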

Tell Google NOT to Index Certain Parts of Your Web Pages

There are several ways to instruct Google to stay away from various pages in your site: robots.txt directives, nofollow attributes on links, meta noindex/nofollow directives, X-Robots noindex/nofollow directives, and so on. These directives all function in different ways, but they all serve the same basic purpose: control how Google crawls the various pages on your site. For example, you can use meta noindex to instruct Google not to index your sitemap, RSS feed, or any other page you wish. This […] Continue reading »
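For example, a meta noindex directive looks like this (placed in the page's <head>; the follow value is one common pairing):

    <!-- keep this page out of the index, but still follow its links -->
    <meta name="robots" content="noindex,follow">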

Yahoo! Slurp too Stupid to be a Robot

I really hate bad robots. When a web crawler, spider, bot — or whatever you want to call it — behaves in a way that is contrary to expected and/or accepted protocols, we say that the bot is acting suspiciously, behaving badly, or just acting stupid in general. Unfortunately, there are thousands — if not hundreds of thousands — of nefarious bots violating our sites every minute of the day. For the most part, there are effective methods available enabling […] Continue reading »

Yahoo! Lies about Obeying Robots.txt Directives

There are two possibilities here: either Yahoo!’s Slurp crawler is broken, or Yahoo! lies about obeying robots directives. Neither case is good. Slurp just can’t seem to keep its nose out of my private business. And, as I’ve discussed before, this happens all the time. Here are the two most recent offenses, as recorded in the log file for my blackhole spider trap: Continue reading »

Yahoo! Once Again Caught Disobeying Robots.txt Rules

Hmmm.. Let’s see here. Google can do it. MSN/Live can do it. Even Ask can do it. So why oh why can’t Yahoo’s grubby Slurp crawler manage to adhere to robots.txt crawl directives? Just when I thought Yahoo! finally figured it out, I discover more Slurp tracks in my Blackhole trap for bad spiders: Continue reading »
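For context, the kind of rule Slurp is expected to honor is nothing exotic (the trap path is illustrative):

    # robots.txt — Yahoo's crawler, addressed by name
    User-agent: Slurp
    Disallow: /blackhole/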

Unexplained Crawl Behavior Involving Tagged Query Strings

I need your help! I am losing my mind trying to solve another baffling mystery. For the past three or four months, I have been recording many 404 errors generated from msnbot, Yahoo-Slurp, and other spider crawls. These errors result from invalid requests for URLs containing query strings such as the following:

https://example.com/press/page/2/?tag=spam
https://example.com/press/page/3/?tag=code
https://example.com/press/page/2/?tag=email
https://example.com/press/page/2/?tag=xhtml
https://example.com/press/page/4/?tag=notes
https://example.com/press/page/2/?tag=flash
https://example.com/press/page/2/?tag=links
https://example.com/press/page/3/?tag=theme
https://example.com/press/page/2/?tag=press

Note: For these example URLs, I replaced my domain, perishablepress.com, with the generic example.com. Turns out that listing the plain-text […] Continue reading »

Taking Advantage of the X-Robots Tag

Controlling the spidering, indexing and caching of your (X)HTML-based web pages is possible with meta robots directives such as these:

<meta name="googlebot" content="index,archive,follow,noodp"/>
<meta name="robots" content="all,index,follow"/>
<meta name="msnbot" content="all,index,follow"/>

I use these directives here at Perishable Press and they continue to serve me well for controlling how the “big bots” crawl and represent my (X)HTML-based content in search results. For other, non-(X)HTML types of content, however, using meta robots directives to control indexing and caching is not an option. An […] Continue reading »
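For those non-(X)HTML file types, the X-Robots-Tag HTTP header does the same job at the server level. A hedged Apache sketch, assuming mod_headers is available:

    # Send robots directives along with PDF and Word documents
    <FilesMatch "\.(pdf|doc)$">
        Header set X-Robots-Tag "noindex,noarchive"
    </FilesMatch>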

WordPress Tip: Careful with that Autosave, Eugene

[ Screenshot: WordPress Autosave Message (Saved at 2:34:02.) ]

After upgrading WordPress from version 2.0.5 to 2.3.3, I did some experimenting with the “post autosave” feature. The autosave feature uses some crafty ajax to automagically save your post every 2 minutes (120 seconds by default). Below the post-editing field, you will notice a line of text that displays the time of the most recent autosave, similar to the following: Continue reading »
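One well-known way to adjust that interval is the AUTOSAVE_INTERVAL constant in wp-config.php, shown here as a sketch (it may or may not be the approach the post settles on):

    // wp-config.php — autosave every 5 minutes instead of every 2
    define('AUTOSAVE_INTERVAL', 300); // value in seconds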

Lessons Learned Concerning the Clearfix CSS Hack

I use the CSS clearfix hack on nearly all of my sites. The clearfix hack — also known as the “Easy Clearing Hack” — is used to clear floated divisions (divs) without using structural markup. It is very effective in resolving layout issues and browser inconsistencies without the need to mix structure with presentation. Over the course of the past few years, I have taken note of several useful bits of information regarding the Easy Clear Method. In this article, […] Continue reading »
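For reference, the widely circulated version of the hack looks roughly like this (the post's own refinements may differ):

    .clearfix:after {
        content: ".";
        display: block;
        height: 0;
        clear: both;
        visibility: hidden;
    }
    .clearfix { display: inline-block; } /* triggers hasLayout in IE/Win */
    /* Hide from IE for Mac \*/
    * html .clearfix { height: 1%; }
    .clearfix { display: block; }
    /* End hide from IE for Mac */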

Important Note for Your Custom Error Pages

Just a note to web designers and code-savvy bloggers: make sure your custom error pages are big enough for the ever-amazing <cough> Internet Explorer browser. If your custom error pages are too small, IE will take the liberty of serving its own proprietary web page, replete with corporate linkage and poor grammar. How big, baby? Well, that’s a good question. In order for users of Internet Explorer to enjoy your carefully crafted custom error pages, they need to exceed 512 […] Continue reading »
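A quick way to verify you are over the threshold, assuming a shell is handy (the filename is illustrative):

    # byte count must exceed 512 or IE substitutes its own error page
    wc -c 404.html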

Yahoo! Slurp in My Blackhole (Yet Again)

Yup, ol’ Slurp is at it again, flagrantly disobeying specific robots.txt rules forbidding access to my bad-bot trap, lovingly dubbed the “blackhole.” As many readers know, this is not the first time Yahoo has been caught behaving badly. This time, Yahoo was caught trespassing five different times via three different IPs over the course of four different days. Here is the data recorded in my site’s blackhole log (I know, that sounds terrible): Continue reading »
