I recently launched a simple one-page website called Wutsearch, a search-engine launchpad that I use as my homepage. Usually I work with several browsers open, each set to open multiple tabs on startup. For a long time, all of those tabs were set to the Google homepage. Then DuckDuckGo. Now, it’s Wutsearch 🚀 Continue reading »
One way to prevent Google from indexing certain pages is to use <meta /> elements in the <head></head> section of your web documents. For example, if I want to prevent Google from indexing and archiving a certain page, I would add the following code to the head of my document: Continue reading »
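The excerpt cuts off before the code, but as a minimal sketch (the exact markup appears in the full post), the standard robots meta values for this are noindex and noarchive, targeted at Google’s crawler by name:

    <!-- Prevent Googlebot from indexing or archiving this page -->
    <meta name="googlebot" content="noindex,noarchive" />

Using name="robots" instead of name="googlebot" would apply the same directives to all compliant crawlers.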
A great way to improve your CSS skills is to check out the stylesheets used by other websites. Digging behind the scenes and exploring some applied CSS provides new ideas and insights about everything from specificity and formatting to hacks and shortcuts. Learning CSS by reading about ideal cases and theoretical applications is certainly important, but seeing how the language is applied in “real-world” scenarios provides first-hand knowledge. While there are millions of standards-based, CSS-designed websites to […] Continue reading »
I need your help! I am losing my mind trying to solve another baffling mystery. For the past three or four months, I have been recording many 404 errors generated by msnbot, Yahoo! Slurp, and other spider crawls. These errors result from invalid requests for URLs containing query strings such as the following:

https://example.com/press/page/2/?tag=spam
https://example.com/press/page/3/?tag=code
https://example.com/press/page/2/?tag=email
https://example.com/press/page/2/?tag=xhtml
https://example.com/press/page/4/?tag=notes
https://example.com/press/page/2/?tag=flash
https://example.com/press/page/2/?tag=links
https://example.com/press/page/3/?tag=theme
https://example.com/press/page/2/?tag=press

Note: For these example URLs, I replaced my domain, perishablepress.com, with the generic example.com. Turns out that listing the plain-text […] Continue reading »
Keeping track of your access and error logs is a critical component of any serious security strategy. Often you will see a recorded entry that looks legitimate and is easily dismissed as genuine Google fare, only to discover upon closer investigation that it is a fraudulent agent. There are many such cloaked or disguised agents crawling around these days, mimicking various search engines to stay under the radar. So it’s always a good idea to implement a procedure for […] Continue reading »
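One widely documented verification procedure (an assumption here; the full post details its own approach) is a reverse-then-forward DNS check on the logged IP address. A genuine Googlebot resolves to a googlebot.com hostname, which in turn resolves back to the same IP (the address below is illustrative):

    # Reverse lookup: the hostname should end in googlebot.com (or google.com)
    $ host 66.249.66.1
    1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com.

    # Forward lookup: the hostname should resolve back to the original IP
    $ host crawl-66-249-66-1.googlebot.com
    crawl-66-249-66-1.googlebot.com has address 66.249.66.1

If either step fails, the “Googlebot” entry in your logs is most likely a forged user agent.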
The Internet Archive Wayback Machine is a trip into the online past, offering glimpses of ancient website relics. Reaching back to the virtual dark ages of 1996, the Wayback Machine chronicles over 55 billion pages. Although many of the pages appear incomplete due to missing images, the Wayback Machine provides an invaluable resource, enabling users to experience and learn from the arcane internet of yesterday. Continue reading »
About the Robots Exclusion Standard: The robots exclusion standard, or robots.txt protocol, is a convention to prevent cooperating web spiders and other web robots from accessing all or part of a website. The parts that should not be accessed are specified in a file called robots.txt in the top-level directory of the website. Notes on the robots.txt rules: Rules of specificity apply, not inheritance. Always include a blank line between rules. Note also that not all robots […] Continue reading »
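As a brief sketch of those rules in practice (the paths here are hypothetical), a robots.txt file might hold two records separated by the required blank line; a robot matching the more specific record obeys only that record, not the wildcard:

    # Googlebot matches this record and ignores the wildcard below
    User-agent: Googlebot
    Disallow: /private/

    # All other cooperating robots fall through to this record
    User-agent: *
    Disallow: /private/
    Disallow: /cgi-bin/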