Checklist For Fixing Strange Technical SEO Problems

  • June 25, 2018
  • SEO

Problems come in all sorts. Some can be solved with the snap of your fingers, while others are a little out of the ordinary. That is where a checklist comes in handy to dig out the places with deeper issues. This is why we have put together this 6-point checklist to help you identify and solve some of the stranger technical SEO issues.

But first, you need to identify whether the problem actually matters on a larger scale. If it affects only a small amount of traffic, or is present on only a handful of pages, and you have a whole lot of other actions to execute, the best idea is simply to drop it. You also need to analyze where you are seeing the problem. Certain factors make for more complex problems that require immediate attention and action: big, older websites with more legacy code, websites with lots of client-side JavaScript, and problems related to new Google features, where there is less community knowledge.

Once you have established that the problem really needs to be addressed, you can move ahead and work through the steps in the checklist below.

Is Google able to crawl the page once?

Take a few example pages where you are finding issues, and check whether Googlebot has access to them. This can be checked in four different ways: IP address, country, robots.txt, and user agent. This will give you an idea of whether Google is struggling to fetch the page even once. If a failed crawl can be recreated, then it is likely that Google is failing consistently to fetch the page, which is one of the most basic causes.
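Two of these checks, robots.txt and the user agent, can be reproduced from your own machine. Below is a minimal sketch in Python, assuming the requests library is installed and using a hypothetical page URL; the IP address and country checks cannot be reproduced this way, since they depend on where the request comes from.

```python
# Minimal sketch: can a page be fetched the way Googlebot would fetch it?
# The URL is hypothetical; replace it with a page you are investigating.
import requests
from urllib import robotparser

URL = "https://www.example.com/some-page"  # hypothetical page to test
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

# 1. Is the page blocked by robots.txt for Googlebot?
rp = robotparser.RobotFileParser("https://www.example.com/robots.txt")
rp.read()
print("Allowed by robots.txt:", rp.can_fetch("Googlebot", URL))

# 2. Does the server respond differently to the Googlebot user agent?
for ua in ("Mozilla/5.0", GOOGLEBOT_UA):
    resp = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    print(f"{ua[:20]:<22} -> {resp.status_code}, {len(resp.content)} bytes")
```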

Is your website telling Google two different things?

If Google can fetch the page but is confused about what you are telling it to do, it could be because someone has messed up the indexing directives. That means any tag that defines the correct index status of a page, or which page in the index should rank: canonical tags, noindex, AMP alternate tags, and mobile alternate tags. With these mixed messages, Google won't know how to respond, resulting in strange outcomes. The places to check for indexing directives are the HTTP headers, the HTML head, the sitemap, Google Search Console settings, and JavaScript-rendered vs. hard-coded directives.
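To see at a glance whether a single page is sending mixed signals, you can pull out the directives it serves in one go. The sketch below assumes requests and BeautifulSoup are installed and uses a hypothetical URL; it only covers the HTTP header and the HTML head, not the sitemap or Search Console settings.

```python
# Minimal sketch: collect the indexing directives one page sends in its
# HTTP headers and its <head>, so conflicting signals stand out.
import requests
from bs4 import BeautifulSoup

URL = "https://www.example.com/some-page"  # hypothetical page to test
resp = requests.get(URL, timeout=10)

# Directive sent in the HTTP response headers
print("X-Robots-Tag header:", resp.headers.get("X-Robots-Tag", "(none)"))

# Directives sent in the HTML head
soup = BeautifulSoup(resp.text, "html.parser")
meta_robots = soup.find("meta", attrs={"name": "robots"})
canonical = soup.find("link", rel="canonical")
amp = soup.find("link", rel="amphtml")

print("meta robots:", meta_robots.get("content") if meta_robots else "(none)")
print("canonical:  ", canonical.get("href") if canonical else "(none)")
print("amphtml:    ", amp.get("href") if amp else "(none)")
```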

Is Google able to crawl the page consistently?

To understand what Google is seeing, we need to get the log files. Three checks can help here: status codes, resources, and page size follow-ups. If Google isn't getting 200s consistently in the log files, but we can access the page just fine, then there are differences between Googlebot and us. These differences could be in the way the page is crawled, the times it is crawled, or the requests may not be coming from a bot at all! Working out the reasons behind these differences can be a little tricky, which is why you should speak to a back-end developer for help.
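As a starting point before involving a developer, you can pull the Googlebot entries out of a standard combined-format access log yourself. The sketch below uses a hypothetical log path, and real log formats vary, so the parsing may need adjusting.

```python
# Minimal sketch: summarise the status codes and response sizes served to
# Googlebot in a combined-format access log. The log path is hypothetical.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical location
status_counts = Counter()
sizes = []

with open(LOG_PATH) as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        # combined log format: ... "GET /path HTTP/1.1" 200 5123 ...
        match = re.search(r'" (\d{3}) (\d+|-)', line)
        if match:
            status_counts[match.group(1)] += 1
            if match.group(2) != "-":
                sizes.append(int(match.group(2)))

print("Status codes served to Googlebot:", dict(status_counts))
if sizes:
    print("Average response size:", sum(sizes) // len(sizes), "bytes")
```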

Does Google see what we can see on a one-off basis?

If Google is crawling the page correctly, we need to work out what Google is seeing on the page. For this, we first try to recreate it once with tools like the Mobile-Friendly Test and Fetch & Render. We then compare the outputs to what we see in the browser with a tool like Diff Checker. Any differences found will typically come from JavaScript or cookies.
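If you prefer to script the comparison instead of pasting into an online diff tool, Python's difflib can do the same job. The sketch below assumes you have saved the browser view and the Fetch & Render output to two hypothetical local files.

```python
# Minimal sketch: diff the HTML saved from the browser against the HTML
# returned by a rendering test, to surface content that exists in only one.
import difflib

with open("browser_view.html") as f:
    browser_html = f.read().splitlines()
with open("fetch_and_render_output.html") as f:
    rendered_html = f.read().splitlines()

diff = difflib.unified_diff(
    browser_html, rendered_html,
    fromfile="browser", tofile="rendered", lineterm="",
)
for line in diff:
    print(line)
```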

What is Google actually seeing?

If Google is seeing something different from what we are seeing, there could be a variety of reasons for it, including overloaded servers, the separate rendering of JavaScript, or caching in the way the pages are generated. We can narrow this down by checking Google's cache, running site: searches for specific content, and storing the actual rendered DOM. Once the problem is identified, you should speak to a developer.
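Storing what was served at different times makes these intermittent differences easier to pin down. The sketch below saves dated snapshots of the raw HTML for a hypothetical URL; note that capturing the actual JavaScript-rendered DOM would need a headless browser on top of this.

```python
# Minimal sketch: keep dated snapshots of what a URL returns, so responses
# served at different times can be compared later. URL and folder are
# hypothetical.
import requests
from datetime import datetime
from pathlib import Path

URL = "https://www.example.com/some-page"  # hypothetical page to test
snapshot_dir = Path("snapshots")
snapshot_dir.mkdir(exist_ok=True)

resp = requests.get(URL, timeout=10)
stamp = datetime.utcnow().strftime("%Y%m%d-%H%M%S")
(snapshot_dir / f"page-{stamp}.html").write_text(resp.text, encoding="utf-8")
print("Saved snapshot", stamp, "-", resp.status_code, len(resp.text), "chars")
```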

Is Google combining your website with others?

If all of the above is fine, you need to look at the wider landscape to find the problem. This means looking for duplicate content, which can be found by doing exact-match searches in Google. If exact copies are found, it is possible that they are causing the issues. You may see the effect straight away, or you may see it change over time. Once the problem is found, you will have to work on factors like de-duplicating the content, reducing syndication, or improving the speed of discovery.
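Alongside exact searches in Google, you can roughly measure how close a suspected copy is to your own page. The sketch below compares the extracted text of two hypothetical URLs with difflib's similarity ratio; the text extraction is deliberately rough.

```python
# Minimal sketch: score how similar a suspected copy is to your own page.
# Both URLs are hypothetical.
import difflib
import requests
from bs4 import BeautifulSoup

def page_text(url):
    """Fetch a URL and return its text content, stripped of markup."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return soup.get_text(separator=" ", strip=True)

original = page_text("https://www.example.com/article")        # your page
suspected_copy = page_text("https://other-site.example/copy")  # suspected duplicate

ratio = difflib.SequenceMatcher(None, original, suspected_copy).ratio()
print(f"Similarity: {ratio:.0%}")
```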

If you have worked through all of the above checks, you can be confident that Google can consistently crawl your pages as intended, that it is being sent consistent signals about the status of those pages, that it is rendering the pages as expected, and that it is picking the correct page out of any duplicates. But if a problem still lies somewhere, it is a sure sign that you need to hire a professional SEO and digital marketing company in Bangalore to help you!
