In this tutorial you will learn how to integrate Google’s new reCAPTCHA in WordPress Login, Comment, Registration and Lost Password Forms. Continue reading »
Word on the street is that the new Google+ Share button is the best way yet to benefit from Google’s myriad social-media services and all-important search engine. And Google makes it SO easy to add the new Share button to your website. This article explains what it is, where it fits in with all the other social-Google stuff, and of course how to add the g+ Share button to any site. Continue reading »
I’ve joked that there are a million different Google Analytics WordPress plugins available, but I’ve never been able to find one that’s just dead-simple, plug-n-play, and with clean code and markup, so I wrote my own that does just that: a no-frills way to add the new Google Analytics asynchronous tracking code to all pages on your WordPress-powered site. This analytics plugin is lightweight, fast, and now has over 80,000 active users via WordPress.org. Continue reading »
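For reference, the asynchronous tracking snippet that such a plugin inserts into each page looks roughly like this sketch; UA-XXXXXXX-X is a placeholder property ID, not a real account:

<script type="text/javascript">
	var _gaq = _gaq || [];
	_gaq.push(['_setAccount', 'UA-XXXXXXX-X']); // placeholder property ID
	_gaq.push(['_trackPageview']);
	// load ga.js asynchronously so it does not block page rendering
	(function() {
		var ga = document.createElement('script');
		ga.type = 'text/javascript';
		ga.async = true;
		ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
		var s = document.getElementsByTagName('script')[0];
		s.parentNode.insertBefore(ga, s);
	})();
</script>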
Yes, you can have multiple sitemaps for your site. Create the sitemaps you need, and then specify them in your robots.txt file. For example, here are the robots.txt directives for the two sitemaps used here at Perishable Press: Continue reading »
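The actual directives are shown in the full post; the general pattern is one Sitemap line per sitemap, with placeholder URLs used here:

# declare each sitemap on its own line (URLs are placeholders)
Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/sitemap-alt.xml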
The setup: I recently launched a new plugin that included a Demo page. To keep things flexible, I set up the Demo as a page on my experimental “Labs” WordPress installation, which is entirely nofollow, noindex and noarchive, meaning that Google can’t legitimately see what’s there. Continue reading »
I recently spent some time analyzing Perishable Press pages as they appear in the search results for Google, Bing, et al. Google Webmaster Tools provides a wealth of information about crawl errors, as well as the URLs of any pages that link to missing content. Combined with your site’s access/error logs, you have everything needed to track down 404 errors and clean up your listings in the search engine results. Continue reading »
Cleaning up my files during the recent redesign, I realized that several years had somehow passed since the last time I even looked at the site’s robots.txt file. I guess that’s a good thing, but with all of the changes to site structure and content, it was time again for a delightful romp through robots.txt. This post summarizes my research and gives you a near-perfect robots file, so you can copy/paste completely “as-is”, or use a template to give you […] Continue reading »
In general, Perishable Press enjoys generous ranking in Google’s search-engine results. The site’s many pages bring in lots of traffic for some great keywords, and a direct search for “Perishable Press” returns the first spot, with eight featured site links even. And recently, after switching servers, traffic increased even further. Things were going well, and it seemed like the perfect opportunity to finally renovate and redesign the site. So I dove in... And then approximately 24-48 hours after beginning work […] Continue reading »
There are many ways to optimize your web pages. In addition to reducing HTTP requests and delivering compressed files, we can also minify code content. The easiest way to minify your CSS is to run it through an online code minifier, which automatically eliminates extraneous characters to reduce file size. Minification shrinks file size significantly, by as much as 30% or more (depending on input code). This size reduction is the net result of numerous micro-optimization techniques applied to your stylesheet. […] Continue reading »
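As a hypothetical before/after illustration (not taken from the article), minification collapses a rule like this:

/* before: readable formatting, longhand values */
.example {
	margin: 0px 0px 10px 0px;
	background-color: #ffffff;
}

/* after: whitespace and comments stripped, values shortened */
.example{margin:0 0 10px;background-color:#fff}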
Many bloggers, designers, and developers take advantage of Google’s free Analytics service to track and monitor their site’s statistics. Along with a Google account, all that’s needed to use Google Analytics is the addition of a small slice of JavaScript into your web pages. For a long time, there was only one way of doing this, and then in 2007 Google improved their GATC code and established a new way for including it in your web pages. Many people switched […] Continue reading »
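For context, the improved ga.js version of the tracking code from that era looked roughly like this sketch, again with a placeholder property ID:

<script type="text/javascript">
	var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
	document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
	try {
		var pageTracker = _gat._getTracker("UA-XXXXXXX-X"); // placeholder property ID
		pageTracker._trackPageview();
	} catch(err) {}
</script>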
I recently added to my growing library of image-preloading methods with a few new-&-improved techniques. After posting that recent preloading article, an even better way of preloading images using pure CSS3 hit me:

.preload-images {
	background: url(image-01.png) no-repeat -9999px -9999px;
	background: url(image-01.png) no-repeat -9999px -9999px,
	            url(image-02.png) no-repeat -9999px -9999px,
	            url(image-03.png) no-repeat -9999px -9999px,
	            url(image-04.png) no-repeat -9999px -9999px,
	            url(image-05.png) no-repeat -9999px -9999px;
}

Using CSS3’s new support for multiple background images, we can use a single, existing element to preload all […] Continue reading »
Preloading images is a great way to improve the user experience. When images are preloaded in the browser, the visitor can surf around your site and enjoy much faster loading times. This is especially beneficial for photo galleries and other image-heavy sites where you want to deliver the goods as quickly and seamlessly as possible. Preloading images definitely helps users without broadband enjoy a better experience when viewing your content. In this article, we’ll explore three different preloading techniques to […] Continue reading »
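The article covers the techniques in detail; as a general flavor, one common JavaScript approach creates Image objects so the browser caches the files ahead of time (filenames are placeholders):

var preloads = ['image-01.png', 'image-02.png', 'image-03.png'];
for (var i = 0; i < preloads.length; i++) {
	var img = new Image();
	img.src = preloads[i]; // requesting the file warms the browser cache
}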
There are several ways to instruct Google to stay away from various pages in your site: robots.txt directives, nofollow attributes on links, meta noindex/nofollow directives, X-Robots noindex/nofollow directives, and so on. These directives all function in different ways, but they all serve the same basic purpose: control how Google crawls the various pages on your site. For example, you can use meta noindex to instruct Google not to index your sitemap, RSS feed, or any other page you wish. This […] Continue reading »
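As a quick sketch of the last option, an X-Robots-Tag header can be sent at the server level; this assumes Apache with mod_headers enabled, and the file pattern is only an example:

# send noindex/nofollow headers for XML and RSS files
<FilesMatch "\.(xml|rss)$">
	Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>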
In my recent guest post at The Nexus, I discuss Google’s new nofollow policy and suggest several ways to deal with it. In that article, I explain how Google allegedly has changed the way it deals with nofollow links. Instead of transferring leftover nofollow juice to remaining dofollow links as they always have, Google now pours all that wonderful nofollow juice right down the drain. This shift in policy comes as a terrible surprise to many webmasters and SEO gurus, […] Continue reading »
Anyone plugged into the Web these days has heard about how Google has supposedly changed the way it deals with nofollow attributes. According to a number of speculative reports, Google will no longer apply unused nofollow PageRank to other links on the page. So, let’s say that you have some sites that have been PageRank “sculpted” by way of strategically applied nofollow tags. For example, you may have nofollowed all of your comment, footer, or sidebar links. Ever since Google […] Continue reading »
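For anyone unfamiliar with the markup, a sculpted link simply carries the nofollow value in its rel attribute; the URL here is a placeholder:

<a href="https://example.com/" rel="nofollow">Example Link</a>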
One way to prevent Google from crawling certain pages is to use <meta /> elements in the <head></head> section of your web documents. For example, if I want to prevent Google from indexing and archiving a certain page, I would add the following code to the head of my document: Continue reading »
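A minimal sketch of such a meta element, covering both noindex and noarchive, looks like this (the original post may use slightly different values; swapping robots for googlebot restricts the directive to Google’s crawler specifically):

<meta name="robots" content="noindex,noarchive" />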