This post is about how I cleaned up an incorrect URL in the Google search results. My business site is basically a one-page portfolio site, located at the URL https://monzillamedia.com/. But in the Google search results, the URL was showing as https://monzilla.biz/, which did not exist. So all potential customers were getting an error page. Fortunately I was able to re-acquire the monzilla.biz domain and redirect all traffic to monzillamedia.com. Continue reading »
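For anyone needing to do something similar, a domain-to-domain redirect like this is usually handled at the server level. Here is a minimal sketch, assuming an Apache server with mod_rewrite enabled (not necessarily the exact rules used for monzilla.biz):

```apacheconf
# Sketch: send every request for monzilla.biz to the same path on monzillamedia.com
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?monzilla\.biz$ [NC]
RewriteRule (.*) https://monzillamedia.com/$1 [R=301,L]
```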
While solving the recent search engine spoofing mystery, I came across two excellent examples of spoofed search engine bots. This article uses the examples to explain how to identify any questionable bots hitting your site. Continue reading »
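For reference, the usual way to verify a bot that claims to be Googlebot is a reverse/forward DNS check. A minimal PHP sketch of that idea (illustrative only, not the examples from the article):

```php
<?php
// Minimal sketch (not the article's actual examples): verify that a visitor
// claiming to be Googlebot really is, using a reverse/forward DNS check.
$ip   = $_SERVER['REMOTE_ADDR'];
$host = gethostbyaddr($ip); // reverse DNS: IP -> hostname

if ($host && preg_match('/\.(googlebot|google)\.com$/i', $host)) {
	// genuine Google crawlers resolve to googlebot.com or google.com ...
	if (gethostbyname($host) === $ip) {
		// ... and the forward lookup points back to the same IP: verified
	} else {
		// hostname does not resolve back to the visiting IP: spoofed
	}
} else {
	// hostname does not belong to Google: likely a fake bot
}
```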
Here is a working list of user agents for the top search engines. I use this information frequently in my plugins, such as Blackhole for Bad Bots and BBQ Pro, so I figured it would be useful to post it online for the benefit of others. Having the user agents for these popular bots all in one place helps to streamline my development process. Each search engine includes references and a regex pattern to match all known […] Continue reading »
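The per-engine patterns are in the list itself; to show how such a pattern might be used, here is an illustrative PHP sketch (the regex below is a simplified assumption, not the article's actual patterns):

```php
<?php
// Illustrative sketch only -- the article's list includes the real per-engine patterns.
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

// Simplified pattern matching a few well-known bot user agents
if (preg_match('/googlebot|bingbot|slurp|baiduspider|yandexbot|duckduckbot/i', $ua)) {
	// request claims to be from a major search engine bot
}
```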
Yes, you can have multiple sitemaps for your site. Create the sitemaps you need, and then specify them in your robots.txt file. For example, here are the robots.txt directives for the two sitemaps used here at Perishable Press: Continue reading »
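The actual directives follow in the article, but in general multiple Sitemap lines in robots.txt look something like this (example URLs, not the actual Perishable Press sitemaps):

```text
# robots.txt -- each sitemap gets its own Sitemap directive
Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/sitemap-pages.xml
```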
The setup: I recently launched a new plugin that included a Demo page. To keep things flexible, I set up the Demo as a page on my experimental “Labs” WordPress installation, which is entirely nofollow, noindex and noarchive, meaning that Google can’t legitimately see what’s there. Continue reading »
I recently spent some time analyzing Perishable Press pages as they appear in the search results for Google, Bing, et al. Google Webmaster Tools provides a wealth of information about crawl errors, as well as the URLs of any pages that link to missing content. Combined with your site’s access/error logs, this information gives you everything you need to track down 404 errors and clean up your listings in the search engine results. Continue reading »
Cleaning up my files during the recent redesign, I realized that several years had somehow passed since the last time I even looked at the site’s robots.txt file. I guess that’s a good thing, but with all of the changes to site structure and content, it was time again for a delightful romp through robots.txt. This post summarizes my research and gives you a near-perfect robots file, so you can copy/paste it completely “as-is”, or use it as a template to give you […] Continue reading »
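As a rough idea of the general shape (a minimal sketch only, not the “near-perfect” file presented in the article):

```text
# Minimal robots.txt sketch for a WordPress site (illustrative only)
User-agent: *
Disallow: /wp-admin/

Sitemap: https://example.com/sitemap.xml
```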
In general, Perishable Press enjoys generous ranking in Google’s search-engine results. The site’s many pages bring in lots of traffic for some great keywords, and a direct search for “Perishable Press” returns the first spot, with eight featured site links even. And recently, after switching servers, traffic increased even further. Things were going well, and it seemed like the perfect opportunity to finally renovate and redesign the site. So I dive in… And then approximately 24-48 hours after beginning work […] Continue reading »
Many bloggers, designers, and developers take advantage of Google’s free Analytics service to track and monitor their site’s statistics. Along with a Google account, all that’s needed to use Google Analytics is the addition of a small slice of JavaScript into your web pages. For a long time, there was only one way of doing this, and then in 2007 Google improved their GATC code and established a new way for including it in your web pages. Many people switched […] Continue reading »
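For context, the newer ga.js-based GATC introduced around that time looked roughly like this (simplified here, with a placeholder account ID):

```html
<script type="text/javascript" src="http://www.google-analytics.com/ga.js"></script>
<script type="text/javascript">
	var pageTracker = _gat._getTracker("UA-XXXXXX-X"); // your Analytics account ID
	pageTracker._trackPageview();
</script>
```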
There are several ways to instruct Google to stay away from various pages on your site: robots.txt directives, nofollow attributes on links, meta noindex/nofollow directives, X-Robots noindex/nofollow directives, and so on. These directives all function in different ways, but they all serve the same basic purpose: controlling how Google crawls the various pages on your site. For example, you can use meta noindex to instruct Google not to index your sitemap, RSS feed, or any other page you wish. This […] Continue reading »
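The X-Robots approach in particular is handy for files like XML sitemaps, where a meta element is not an option. A minimal sketch, assuming Apache with mod_headers enabled:

```apacheconf
# Sketch: ask Google not to index the sitemap file itself
<FilesMatch "sitemap\.xml$">
	Header set X-Robots-Tag "noindex"
</FilesMatch>
```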
In my recent guest post at The Nexus, I discuss Google’s new nofollow policy and suggest several ways to deal with it. In that article, I explain how Google allegedly has changed the way it deals with nofollow links. Instead of transferring leftover nofollow juice to remaining dofollow links as it always has, Google now pours all that wonderful nofollow juice right down the drain. This shift in policy comes as a terrible surprise to many webmasters and SEO gurus, […] Continue reading »
Anyone plugged into the Web these days has heard about how Google has supposedly changed the way it deals with nofollow attributes. According to a number of speculative reports, Google will no longer apply unused nofollow PageRank to other links on the page. So, let’s say that you have some sites that have been PageRank “sculpted” by way of strategically applied nofollow tags. For example, you may have nofollowed all of your comment, footer, or sidebar links. Ever since Google […] Continue reading »
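For reference, a “sculpted” link is just an ordinary anchor carrying the rel attribute (placeholder URL):

```html
<!-- rel="nofollow" asks Google not to pass PageRank through this link -->
<a href="https://example.com/" rel="nofollow">Example link</a>
```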
One way to prevent Google from crawling certain pages is to use <meta /> elements in the <head></head> section of your web documents. For example, if I want to prevent Google from indexing and archiving a certain page, I would add the following code to the head of my document: Continue reading »
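The exact code is shown in the article; the general form of such a meta element is something like:

```html
<!-- ask robots not to index or archive (cache) the page;
     use name="googlebot" instead of "robots" to target Google only -->
<meta name="robots" content="noindex,noarchive" />
```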
With the recent Feedburner service outage, many sites across the Web experienced severe drops in their Feedburner subscriber counts. Apparently, Google is requiring all Feedburner accounts to be transferred over to Google by the end of February. In the midst of this mass migration, chaotic subscriber data has been reported to include everything from dramatic count drops and fluctuating reach statistics to zero-count values and dreaded “N/A” subscriber-count errors. Obviously, displaying erroneous subscriber-count data on your site is not a […] Continue reading »
Ever wanted to provide automatic language translations of your web pages without installing another plugin? Here is a valid, SEO-friendly technique that takes advantage of Google’s free translation service. All you need is a PHP-enabled server and you’re good to go. Just copy and paste the following code into the desired location in your page template and enjoy the results. Once in place, this code will produce translation links for eight common languages for every page on your site. Grab, […] Continue reading »
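The copy/paste code itself is in the article; as a rough sketch of the idea (the translation URL structure and language list here are assumptions, and only a few languages are shown):

```php
<?php
// Rough sketch (not the article's exact code): build links that send the
// current page through Google's free translation service.
$languages = array(
	'de' => 'Deutsch',
	'es' => 'Español',
	'fr' => 'Français',
	'it' => 'Italiano', // the article's version covers eight languages
);
$page = 'http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];

foreach ($languages as $code => $label) {
	$url = 'https://translate.google.com/translate?sl=en&tl=' . $code . '&u=' . urlencode($page);
	echo '<a href="' . htmlspecialchars($url) . '">' . $label . '</a> ';
}
```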
I recently added OpenSearch functionality to Perishable Press. Now, OpenSearch-enabled browsers such as Firefox and IE 7 give users the option of adding a dedicated, OpenSearch-powered Perishable Press search to their built-in search feature. The autodiscovery feature of supporting browsers detects the custom search protocol and enables users to easily add it to their collection of readily available site-specific search options. Users may now search the entire Perishable Press domain with the click of a button. […] Continue reading »
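For the curious, autodiscovery works by pointing browsers at the site’s OpenSearch description document via a link element in the page head, along these lines (the href is a placeholder for wherever the XML file actually lives):

```html
<link rel="search"
      type="application/opensearchdescription+xml"
      title="Perishable Press"
      href="/opensearch.xml" />
```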