I’ve recently been working with a client of Digital Next, trying to interpret the many changes in search engine visibility this domain has seen throughout Google’s recent Panda updates.
To give you an idea of the work I’ve completed in recent weeks, I’ll give a very brief run-down of what we’ve seen and the rationale behind these changes.
We recently redeveloped this site on a WordPress/WooCommerce platform, both to improve its structure for search engines and to suit the client’s specific business model; the new site went live towards the third week of September. Naturally, their search visibility started to rise as Google crawled and indexed the new structure and site content.
However, upon the release of the 27th Panda refresh, noted by Moz on 23rd September, we saw noticeable drops in specific keyword positions across the board:
Investigation showed that some of the inner pages contained duplicated content, introduced as the client added new products to the site; a simple mistake for anyone unfamiliar with Google and its wonderful updates.
Although Google never seems to favour the typical webmaster, it does offer a route to redemption in the eyes of search engines thanks to a very useful facility found in every Google Webmaster Tools account. Webmaster Tools has always offered a number of features to make the life of any webmaster a little bit easier, covering areas such as search appearance and search traffic.
The ‘Fetch as Google’ tool has been heralded by webmasters across the globe trying to fight the many, many updates released throughout Google’s history; none more so than the Panda algorithm.
Our own Head of SEO, Gary Douglas, recently wrote a blog discussing the power of the Panda updates throughout the algorithm’s history, and how people seem to underestimate the power of content in the world of search engine marketing.
WebNots offers a detailed blog on how to use this tool and the purposes it can serve when fighting the negative effects of an update or penalty. It lets you check how Google views your web pages, which is useful when you’re making changes, and helps you troubleshoot those pages to see how they can be improved further.
The first step in correcting this issue was to identify the duplicated content throughout the site, specifically on the landing pages that had been indexed by search engines. Tools such as Copyscape and Siteliner will check landing page content against external and internal sources respectively.
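If you’d rather do a first pass yourself before reaching for those tools, a rough internal check can be scripted. The sketch below is a minimal, illustrative approach (not how Copyscape or Siteliner work): it compares the body copy of pages pairwise and flags any pair that overlaps beyond a threshold. The example URLs and the 0.9 threshold are assumptions for illustration.

```python
# Minimal sketch: flag landing pages with near-duplicate body copy.
# The page texts, paths, and 0.9 threshold are illustrative assumptions.
from difflib import SequenceMatcher


def similarity(text_a, text_b):
    """Return a 0-1 ratio of how much two blocks of copy overlap, word-wise."""
    return SequenceMatcher(None, text_a.split(), text_b.split()).ratio()


def flag_duplicates(pages, threshold=0.9):
    """Yield (url_a, url_b, score) for pages whose copy is suspiciously similar."""
    urls = list(pages)
    for i, a in enumerate(urls):
        for b in urls[i + 1:]:
            score = similarity(pages[a], pages[b])
            if score >= threshold:
                yield a, b, score


if __name__ == "__main__":
    pages = {
        "/widget-red": "Our premium widget ships free across the UK.",
        "/widget-blue": "Our premium widget ships free across the UK.",
        "/about": "Digital Next is a search marketing agency.",
    }
    for a, b, score in flag_duplicates(pages):
        print(f"{a} vs {b}: {score:.0%} similar")
```

In practice you would feed in the extracted text of each indexed landing page; anything scoring near 100% is a candidate for the fresh rewrite discussed below.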
A simple site: search operator (e.g. site:yourdomain.com) can tell us when a specific page was last indexed and how it appeared to Googlebot. You should be able to see whether old or new content appears on that page.
You can also use tools such as the Wayback Machine to check how landing pages appeared on specific dates in the past, although that tool is geared more towards web design purposes.
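The Wayback Machine also exposes a public availability API, which can save you clicking through the calendar interface when checking several pages. A hedged sketch using only the standard library is below; "example.com" and the September date are placeholders, not the client’s actual domain.

```python
# Sketch: query the Wayback Machine's public availability API for the
# snapshot closest to a given date. "example.com" is a placeholder domain.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

WAYBACK_API = "https://archive.org/wayback/available"


def availability_url(page_url, timestamp):
    """Build the API query URL for the snapshot closest to YYYYMMDD."""
    return f"{WAYBACK_API}?{urlencode({'url': page_url, 'timestamp': timestamp})}"


def nearest_snapshot(page_url, timestamp):
    """Return the archive URL of the closest available snapshot, or None."""
    with urlopen(availability_url(page_url, timestamp)) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None


if __name__ == "__main__":
    # e.g. how did the homepage look around the 23rd September refresh?
    print(nearest_snapshot("example.com", "20140923"))
```

This only tells you what the archive happened to capture, of course; for what Google itself saw, the site: operator and cached results above are the better guide.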
Now that we’d identified where the Panda had attacked, we needed to replace the old or spammy content with fresh, high-quality content relevant to the purpose of each landing page. This is where the ‘Fetch as Google’ tool comes to the fore.
Yes, your new content is live, and yes it’s the best quality content the world can find. But that won’t do any good if you have to wait months for Google and other search engines to find it.
A useful article from For Dummies explains how frequently Google crawls your website, with deep crawls occurring around once a month. The time for Google to read your content and update its index can then take up to six weeks itself, meaning you could wait between 10 and 12 weeks to see any kind of result.
Using the Fetch as Google tool lets you submit your landing pages through Google Webmaster Tools to be crawled and indexed much more quickly, and it indicates whether each crawl has been a success or failure. I’ve posted a snapshot of this completed task below.
The downside, however, is that submissions are limited to a set number per session, so the process can be very time-consuming if a large number of your landing pages have been hit.
And you have to bank on these being successful first time around, with none of them returning some form of HTTP error; another potential headache.
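Given those limited submissions, it’s worth confirming that every edited page actually resolves before you spend a fetch on it. The sketch below is one simple way to pre-flight a list of URLs with the standard library; the URL list itself is hypothetical, and in practice you’d run this against your edited landing pages.

```python
# Sketch: pre-check that pages return HTTP 200 before submitting them via
# Fetch as Google, so limited submissions aren't wasted on broken URLs.
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen


def preflight(urls):
    """Return {url: status_code} (None if the request failed to connect)."""
    results = {}
    for url in urls:
        try:
            with urlopen(Request(url, method="HEAD")) as resp:
                results[url] = resp.status
        except HTTPError as err:
            results[url] = err.code  # server answered with 4xx/5xx
        except URLError:
            results[url] = None  # DNS failure or connection refused
    return results


if __name__ == "__main__":
    # Hypothetical edited landing pages awaiting submission.
    checks = preflight(["https://example.com/widget-red"])
    for url, status in checks.items():
        print(url, "->", status)
```

Anything that doesn’t come back as 200 gets fixed first, and only the clean list goes into Fetch as Google.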
Needless to say, I was successful on this occasion, and each edited page was submitted to Google’s index for consideration.