{"id":4111,"date":"2020-05-12T10:11:23","date_gmt":"2020-05-12T10:11:23","guid":{"rendered":"https:\/\/chef-dev.creativegeek.co.il\/?p=4111"},"modified":"2022-10-27T16:19:08","modified_gmt":"2022-10-27T13:19:08","slug":"google-analytics-blindspot-uncovering-up-to-15-hidden-organic-traffic","status":"publish","type":"post","link":"https:\/\/trackingchef.com\/google-analytics\/google-analytics-blindspot-uncovering-up-to-15-hidden-organic-traffic\/","title":{"rendered":"Google Analytics Blindspot: Uncovering up to 15% hidden Organic traffic"},"content":{"rendered":"\n

A while ago I was working with a client whose business was selling unique, highly detailed reports. Each report took months to compile and cost their clients a fair amount of money.

When performing a routine check-up of their site's health in Google's Search Console (aka Webmaster Tools), I discovered that the client was ranking organically for the exact title of their latest report. Usually you'd expect this to be good news, but in this case we only had a rather generic page targeting the reports. Digging deeper into the Search Analytics report showed that they were actually ranking for the report itself, making it freely available to anyone.

Aside from the issue of the client giving its product away for free, this also highlights how certain assets can gain Organic rankings and drive real traffic that goes unmeasured.

<h1>How is this happening?</h1>

Google's crawlers can process documents much like they do HTML pages. The most common formats you will find are PDF and Word documents (Doc/Docx). If a document's content is deemed relevant, it can rank as an organic result alongside "regular" HTML pages.

A visitor landing on these documents goes undetected, as no tracking is available for them (JavaScript cannot run inside a PDF or Word file). The only way to get a good estimate of their actual traffic is to look at these documents in the Search Analytics report.

When examining this situation across several accounts, I found 5–15% of clicks landing on such documents. That is a significant amount of traffic flying under the radar.

<h1>But wait, why is this so bad?</h1>

Well, the first and obvious reason is that you can't measure it. It's the "tree falls in a forest" question, but with visitors on your site.

The second reason follows from the first: if you can't measure it, you can't improve it (or, if it's already perfect, replicate it). I want to know whether people read through my content or bounced right off. I want to know how they reached it and where they went next. These measurements (among others) help improve not only that specific piece, but also future content and the overall user experience.

So if something is ranking well for a topic I'm targeting, I want to know all the nitty-gritty details.



<h1>So where do you start?</h1>

<h2>Step #1: Identify all exposed assets</h2>

Start by mapping out which content on your domain Google has already indexed. The syntax for this is quite simple:
"site:mydomain.com inurl:pdf"
This should capture any PDF document on your domain and its subdomains. You can also tweak it to match only a specific subdomain, e.g. blog.example.com.
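For reference, here are a few variations of that query, with example.com standing in as a placeholder domain; Google's filetype: operator matches the document type directly, while inurl: only matches the URL string:

```
site:example.com inurl:pdf
site:example.com filetype:pdf
site:example.com (filetype:doc OR filetype:docx)
site:blog.example.com filetype:pdf
```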

\"image-9084557\"<\/figure>\n\n\n\n

<blockquote>Pro tip: To prevent additional results from being hidden by Google, be sure to click "If you like, you can repeat the search with the omitted results included."</blockquote>

Go over Google's results and flag anything that shouldn't be publicly indexed: internal documents, gated content that is freely accessible, etc.

<h2>Step #2: Check which assets receive traffic</h2>

Next, look at the documents that actually rank and drive traffic. This is done in the Search Analytics report (in Search Console) by filtering the Pages tab to show only pages whose URL contains the string "pdf".

\"image-4477059\"<\/figure>\n\n\n\n

<blockquote>Pro tip: On the Pages tab, clicking a specific page filters the data down to it, so when you switch back to the Queries tab you can see the exact searches it ranks for.</blockquote>

Now you can tell which of these documents actually drive traffic and estimate the size of your "blind spot".
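If you prefer to pull these numbers programmatically (for example, to keep an eye on the blind spot over time), the same data is available through the Search Console API. A minimal sketch in Python, assuming a service account with read access to the property and the google-api-python-client package installed; the credentials file, property URL and date range are placeholders:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Service-account credentials with read-only access to Search Console
SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
service = build("searchconsole", "v1", credentials=creds)

# Query clicks and impressions for pages whose URL contains ".pdf"
response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",  # placeholder property
    body={
        "startDate": "2020-01-01",
        "endDate": "2020-03-31",
        "dimensions": ["page"],
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "page",
                "operator": "contains",
                "expression": ".pdf",
            }]
        }],
        "rowLimit": 1000,
    },
).execute()

for row in response.get("rows", []):
    page = row["keys"][0]
    print(f"{page}\t{row['clicks']} clicks\t{row['impressions']} impressions")
```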



<h1>What can you do?</h1>

The answer here is simply: "it depends".

<h2>Case 1: Ranking highly for a specific keyword</h2>

If it ain't broke, don't fix it.

Examine the document in question: does it use your up-to-date branding? Is its content still relevant? If minor tweaks will do, just be sure to update the document under the same URL.

<h2>Case 2: Ranking poorly for a specific keyword</h2>

At the end of the day, HTML pages rank better than documents, simply because of the number of ranking signals available on them. So, for example, if a certain document competes for a keyword but only reaches the second page, consider converting it into a proper HTML page. Adapt the document's content to the page's layout and, of course, set up a permanent (301) redirect from the document's original path to its new location. Any boost to the content also helps, for example adding relevant media (images and videos) with proper tagging.
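As a sketch of that redirect, assuming the site runs behind nginx and using hypothetical paths (an equivalent rule exists for Apache's .htaccess or your CDN):

```nginx
# Permanently redirect the old document path to the new HTML page (paths are placeholders)
location = /reports/market-trends-2019.pdf {
    return 301 /reports/market-trends-2019/;
}
```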

<h2>Case 3: Gated content left unlocked</h2>

In this case, the best course of action is to point the document's path to a new page with proper gating.

A "quick and dirty" solution is a 301 redirect that entirely replaces the existing document's path with the gated page.

A better solution is to return a 401 response while pointing visitors to the registration page. This tells bots that the content is still there, just not available without some identification.

To preserve some of the existing Organic strength, I strongly recommend keeping a short excerpt of the content available on that page.
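A minimal sketch of that behaviour, again in nginx and with hypothetical paths; the registration page (which carries the short excerpt) is served as the body of the 401 response:

```nginx
# A formerly public PDF now requires registration (paths are placeholders)
location = /downloads/premium-report.pdf {
    error_page 401 /register/;  # serve the registration page as the 401 body
    return 401;                 # signal that the content still exists but needs identification
}
```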



<h1>Final thoughts</h1>

This is a phenomenon you would expect to see only on websites that have been active for a long time. In most cases I've found, the content was created several years ago and had been ranking undetected for an unknown period of time.

Now, while most rank-tracking software on the market should identify such documents ranking for your domain, that still doesn't close the gap in measuring actual engagement with them.

Using the tactics above can help both bridge this data gap and make fuller use of existing content to drive more traffic.

Originally published on my blog on Medium.
