
How do Googlebot and crawling dynamics work, according to Google (video)

Another “myth busting” video by Google, this time explaining the differences between Googlebot, crawling, indexing, rendering and the actual ranking process.

 

 

Once again, this is fairly general beginner material and no actual myths are busted here. But we still get a bunch of nice explanations and insights, which is a step in the right direction (instead of offering only documentation).

 

So, they give a very helpful library analogy to explain the differences: the search index is the library, Googlebot is the agent that brings new books to the library, and the ranking algorithm is the librarian that sorts the books by importance and demand.

 

Time for our own myth-busting-busting: things that may be unclear in the video

 

So, like we did last time, we will go over topics mentioned in the video and add our own insights as an industry-leading Google monitoring tool.

 

Index change rates

While the library analogy is great, it also somewhat hints that the index doesn't change all that frequently. But according to virtually every observation and countless case studies, that impression is misleading: the index changes virtually all the time, which is why we use SERP tracking tools to begin with.

Here’s how the SERP really behaves over time, as shown by our Full SERPs tool:

 

As you can see, the Top 20 roster is quite lively, and this is not even during an update rollout period. This is how a SERP usually behaves in its "natural state".

Full SERPs shows the entire Top 100 search results and how they behave in a single graph. You can see it in action if you are a PRT user, or via our 7-day free trial (no credit card details required to activate).

Googlebot partially behaves like a browser, in the sense that it too follows the instructions given in the HTML code and renders the data accordingly.

 

What is crawling

Crawling is the data collection done by Googlebot's spiders to populate the search index. Crawling only populates the index; it does not rank anything at first. Ranking comes at a later stage, using quality ranking signals.

Crawling happens in two ways: on demand, because a webmaster submitted a sitemap, or organically, because Googlebot was directed to the page via a link from a different website (hence one of the powers of backlinks).
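For the on-demand route, a sitemap is simply an XML file listing the URLs you want crawled. A minimal sketch is below; the URLs are placeholders, and the optional <lastmod> tag hints at when a page last changed:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2019-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/products/</loc>
  </url>
</urlset>
```

You then submit the sitemap's URL to Google (for example, through Search Console) to request crawling.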

 

Relation to SEO site audit tools

Site audit SEO tools attempt to emulate Google's spiders and ranking methods to determine whether everything on your website is optimized properly. If you run our SEO Site Audit tool, for example, you will get an overall score for your website's SEO quality:

 

The score is calculated with Google's guidelines in mind, plus well-established ranking factors discovered by our tool and SEO experts through countless rounds of trial and error. If you get a low score, that is a good reason to start fixing issues that might hinder your potential organic growth. A low score also means your website is more susceptible to losing ranks following an algorithm update.

Use the tool if you are our user, or via our 7-day free trial.

 

Crawl frequency

They mention in the video that different content types get crawled at different rates. Content they deem "fresh" – such as news sites – will get crawled frequently. Observations by us and other tools suggest that high-authority news sites can get crawled every few seconds!

E-commerce websites that change products and prices frequently might also be considered fresh and get crawled often. A landing page for a service or business that doesn't change, or isn't meant to change frequently, will get crawled less often to conserve system resources.

Spammy or broken websites will get crawled rarely and might not get crawled at all if they receive a penalty (more on that in our own Google penalty guide).

If you have webpages that are still in development, new webpages that are still being optimized, and so on, it is recommended to use the NOINDEX robots meta tag, which tells Google's spiders to ignore that content. That way, they will not treat the content as broken or spammy and negatively influence your crawl frequency and ranking:

 

<head><meta name="robots" content="noindex, nofollow"></head>

 

After you have made the changes and removed the noindex directive, you can ask Google for a recrawl and reindexing.

 

Crawl budget

Crawl budget is the limit Google puts on crawling, taking into account both its own resources and yours. Crawling puts a strain on your servers, because Google's spiders are like a slew of visitors. This can be a nuisance if you run a large eCommerce website with lots of data, where a crawl event might impact sales because of limited bandwidth.

Taking all of this into account, Googlebot crawls websites gradually and in waves. It doesn't work on a site-by-site basis; instead, it crawls chunks of many different websites at once.

As a result, some of your webpages might get indexed and ranked before others.

 

SERP tracking frequency

The same way Google crawls your website, our own bots "crawl" Google's search index to get you your actual organic ranks.

Because the search index changes virtually all the time, and recrawls happen routinely (on demand or organically), you MUST have the freshest ranking data, live and straight from their search index.

This is where rank tracking frequency becomes very important. If your SERP tracker checks your ranks once a week rather than daily, you will miss ranking changes and will not be able to properly track your Google ranks!

Make sure the tool you use checks your ranking data at least once a day like we do!

We at Pro Rank Tracker even offer our users an additional on-demand ranking check each day!

 

Check out our 7-day free trial now!

 

Rendering

This one was somewhat of a myth that might still prevail – that JavaScript is unwise for SEO. Luckily, they address this issue in the video.

If Googlebot encounters JavaScript, it will simply place it in a queue and render it later, separately, to better manage the crawl budget.

Google recommends the use of dynamic rendering, where prerendered static HTML is served to the crawler for indexing.

Rendering strategies can improve performance and improve your ranking chances as a result. We mention this in our Perceived Speed guide.
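To make the dynamic rendering idea concrete, here is a minimal sketch in JavaScript. The function names and the bot-token list are our own illustrative assumptions, not Google's implementation or a complete crawler list; a real setup would make this decision at the web server or CDN level and serve a snapshot generated by a headless browser:

```javascript
// Tokens that commonly appear in crawler user-agent strings.
// Illustrative only -- not an exhaustive list of crawlers.
const BOT_TOKENS = ["googlebot", "bingbot", "yandex"];

// Decide whether a request comes from a known crawler
// by inspecting its User-Agent header.
function isCrawler(userAgent) {
  const ua = (userAgent || "").toLowerCase();
  return BOT_TOKENS.some((token) => ua.includes(token));
}

// Dynamic rendering: crawlers get prerendered static HTML,
// regular visitors get the normal JavaScript-driven page.
function chooseResponse(userAgent) {
  return isCrawler(userAgent) ? "prerendered-html" : "client-side-app";
}
```

The key point is that both audiences see the same content; only the delivery format differs, so this does not count as cloaking.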

 

Is Good content enough to be ranked?

They emphasize in the video that high-quality content should be the main goal. This is a good general guideline to follow, but unfortunately, field observations show it simply isn't enough. Websites with good content that abide by all of Google's rules still often get their ranks demoted following an update. You need to create high-quality, unique content, but also make sure to follow recent strategies from SEO communities and blogs, which stem from actual trial and error.

And in any case, you need to have notifications set up in our tool to alert you as soon as any sudden ranking change is discovered. You can set up any trigger you need:

It can be a drop in ranks, but it can also be a promotion in ranks. Either way, be sure to set them up properly. Here’s the full guide.

Notifications are also included in your 7-day free trial if you are not our user yet!

 

User agents and mobile indexing by OS and device type

A user agent is the name Googlebot's spiders identify themselves with. A crawler attempts to emulate a real internet user, and it takes into account that a user might be surfing from a mobile device or a desktop one. That's why Google has unique user agents based on device type, and specific user agents can be given commands via the robots.txt file and the ROBOTS meta tag.

For example, if you use separate versions of your website for mobile and desktop, the mobile version might address a mobile user agent, while the desktop version should address the desktop user agent.
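As an illustration, a robots.txt file can give different instructions to different user-agent tokens. The paths below are placeholders; note that Googlebot's desktop and smartphone crawlers share the same "Googlebot" token in robots.txt:

```
User-agent: Googlebot
Disallow: /drafts/

User-agent: Googlebot-Image
Disallow: /private-images/

User-agent: *
Allow: /
```

More specific tokens (like Googlebot-Image for image crawling) override the general rules for that crawler.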

What Google doesn't tell us in this video is that they also serve different results depending on which mobile OS the user has and what device type they are using (tablet or phone). Our tool has frequently observed this:

Red is iPhone rank and blue is Android rank

 

What this means is that you must familiarize yourself with ALL of Google’s active indexing types to track everything correctly:

The 2019 Layer Cake of Google Ranks

 

Quality indicators

According to Google, quality indicators are what determine the value of your website and its rank.

Fast webpage performance, bounce rates, CTR, mobile-friendliness, high-quality backlinks, proper structure, and well-written content that abides by their guidelines (no spam, etc.) are all among the 200+ quality indicators that make up their ranking algorithm – the librarian.

Each quality signal is calculated to generate a hidden score only they know. The higher the score, the better the odds of ranking high for a search term.

The weight of each signal keeps shifting and changing. The current estimate is that backlinks remain a significant indicator, along with anything to do with mobile and performance.

 

Being ready for mobile first

They also address the importance of being ready for mobile-first indexing. We covered the upcoming July Google update last week:

Mobile-First Indexing will be active for all new websites starting July! Details and tips within

 

Basically, this means focusing your efforts on making your content as accessible as it can possibly be for mobile users.

 

Tracking everything correctly

Having said all that, there are two things you need to do as far as SERP tracking goes:

  1. Make sure you track all of your Google ranks correctly. This includes taking into account all the relevant device types Google examines: Desktop, iPhone, iPad, Android phone, and Android tablet.
  2. Run an SEO Site Audit to make sure everything is optimized correctly (you get one for free with our 7-day trial).

 

Conclusion

 

Google making this video series on SEO is an overall welcome trend. For the most part it is well done, even if it is slightly vague and general and requires some extra explanation, as in this article.

 

About the most advanced SERP tracker on the market

 

Pro Rank Tracker is an industry leading SERP tracker that is entirely built for SEO beginners and experts alike.

 

We are 100% white label! All of our features can be masked with your logo and company details. This means you can showcase our tool as your own tech to impress clients and appear established.

 

We employ DEEP Google rank tracking, so you will see how well you rank for every known ranking condition.

 

This also means EXACT geo-targeted location ranks – you can check how you rank in specific areas for any device type (down to neighborhood and airport levels), including tracking local businesses that show in Google My Business map results and the Local Pack.

 

All monthly local and global keyword search volumes are revealed.

 

The complete search engine package – in addition to the mandatory Google, we also track 5 additional search engines!

 

Yahoo!, Bing, Yandex, and Amazon (including local ranks for all four). And if you do any video marketing, we can also track YouTube ranks and Google's Video Carousel SERP element.

 

We are the ONLY ones in our field that don’t believe in auto-renewal traps.

 

Our 7-day FREE trial doesn’t require any credit card details to activate.

 

We have full faith in our tech over all the others and don't feel any need to entrap our users and charge them for forgetting to cancel.

 

We are sure that if you give PRT a proper run, you will become our user of your own volition!