The author’s views are entirely his or her own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.
Nike.com uses infinite scrolling to load more products on its category pages. And because of that, Nike risks its loaded content not getting indexed.
For the sake of testing, I entered one of their category pages and scrolled down to pick a product that loads only after scrolling. Then, I used the site: command to check if that URL is indexed in Google. And as you can see in the screenshot below, this URL is impossible to find on Google:
Of course, Google can still reach your products through sitemaps. However, when Googlebot can only find your content through sitemaps rather than links, it becomes harder for it to understand your site structure and the dependencies between your pages.
To make it even more apparent to you, think about all the products that are visible only when you scroll for them on Nike.com. If there's no link for bots to follow, they will see only the first 24 products on a given category page. Of course, for the sake of users, Nike can't serve all of its products in one viewport. But still, there are better ways of optimizing infinite scrolling to be both comfortable for users and accessible for bots.
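One such approach, sketched below with hypothetical markup (the URL, class name, and loadNextProducts function are illustrative, not Nike's actual code), is to back the infinite-scroll trigger with a real `<a href>` link. Users get the dynamic experience, while bots still have a link to follow:

```html
<!-- Hypothetical markup: the href gives crawlers a real page to follow -->
<a href="/w/mens-shoes?page=2" class="load-more">Load more products</a>

<script>
  // For users, JavaScript intercepts the click and appends the next
  // batch of products instead of navigating to the href
  document.querySelector('.load-more').addEventListener('click', (event) => {
    event.preventDefault();
    loadNextProducts(); // hypothetical fetch-and-append function
  });
</script>
```

With this pattern, a crawler that never executes the script still discovers page 2 through a plain link.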
Unlike Nike, Douglas.de uses a more SEO-friendly way of serving its content on category pages.
They provide bots with page navigation based on <a href> links to enable crawling and indexing of the next paginated pages. As you can see in the source code below, there's a link to the second page of pagination included:
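In simplified form, such crawlable pagination can look like this (an illustrative sketch; the paths are made up, not Douglas.de's actual URLs):

```html
<!-- Each page of the category is reachable through a plain <a href> link -->
<nav aria-label="Pagination">
  <a href="/c/make-up/palettes?page=1">1</a>
  <a href="/c/make-up/palettes?page=2">2</a>
  <a href="/c/make-up/palettes?page=3">3</a>
  <a href="/c/make-up/palettes?page=2">Next</a>
</nav>
```

Because each paginated page has its own URL behind a standard link, bots can crawl the full product list without executing any JavaScript.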
Moreover, the paginated navigation may be even more user-friendly than infinite scrolling. The numbered list of category pages may be easier to follow and navigate, especially on large e-commerce websites. Just think how long the viewport would be on Douglas.de if they used infinite scrolling on the page below:
Let's check if that's the case here. Again, I used the site: command and typed the title of one of Otto.de's product carousels:

As you can see, Google couldn't find that product carousel in its index. And the fact that Google can't see that element means that accessing the additional products it links to will be harder. Also, if you prevent crawlers from reaching your product carousels, you'll make it more difficult for them to understand the relationships between your pages.
To find out, check what the HTML version of the page looks like for bots by analyzing the cache version.
To check the cache version of Target.com's page above, I typed cache:https://www.target.com/p/9-39-…, the URL of the analyzed page. I also took a look at the text-only version of the page.
When scrolling, you'll see that the links to related products can also be found in its cache. If you see them here, it means bots don't struggle to find them, either.

However, keep in mind that the links to the exact products you can see in the cache may differ from the ones on the live version of the page. It's normal for the products in the carousels to rotate, so you don't need to worry about discrepancies in specific links.
But what exactly does Target.com do differently? They take advantage of dynamic rendering: they serve the initial HTML, including the links to products in the carousels, as static HTML that bots can process.

However, you must remember that dynamic rendering adds an extra layer of complexity that may quickly get out of hand on a large website. I recently wrote an article about dynamic rendering that's a must-read if you are considering this solution.

Also, the fact that crawlers can access the product carousels doesn't guarantee these products will get indexed. However, it will significantly help them move through the site structure and understand the dependencies between your pages.
It's impossible to fully evaluate a website without a proper site crawl. But looking at its robots.txt file can already allow you to identify critical content that's blocked.

This misuse of the disallow directive may result in rendering problems across your entire website.
To check if that applies in this case, I used Google's Mobile-Friendly Test. This tool can help you diagnose rendering issues by giving you insight into the rendered source code and a screenshot of the rendered page on mobile.

But let's find out if those rendering problems affected the website's indexing. I used the site: command to check if the main content (the product description) of the analyzed page is indexed on Google. As you can see, no results were found:
The layout is essential for Google to understand the context of your page. If you'd like to know more about this crossroads of web technology and layout, I highly recommend looking into a new field of technical SEO called rendering SEO.
Lidl.de proves that a well-organized robots.txt file can help you control your website's crawling. The crucial thing is to use the disallow directive consciously.

On a large e-commerce website, you may easily lose track of all the added directives. Always include as many path fragments of the URL you want to block from crawling as possible. It will help you avoid blocking crucial pages by mistake.
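For example (a hypothetical robots.txt fragment; the paths are illustrative, not Lidl.de's actual directives), a specific pattern is much safer than a broad one:

```
User-agent: *
# Specific: blocks only the faceted filter URLs you intend to block
Disallow: /search/filter?
Disallow: /checkout/

# Too broad: a pattern like the one below would block every parameterized
# URL, potentially including resources Googlebot needs for rendering
# Disallow: /*?
```

The longer and more specific the disallowed path, the smaller the chance of accidentally catching pages or assets you actually want crawled.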
Will users get obsessed with finding that particular product via Walmart.com? They may, but they can also head to any other store selling this item instead.
To fix this problem, Walmart has two solutions:
Implementing dynamic rendering (prerendering) which is, in most cases, the easiest from an implementation standpoint.
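In very simplified form, dynamic rendering boils down to branching on the user agent: known bots receive prerendered static HTML, while regular users get the client-side application. The sketch below is hypothetical Express-style code; isBot, renderStaticHtml, and the bot list are my illustrative names, not Walmart's implementation:

```javascript
// Hypothetical sketch of dynamic rendering, not a production setup.
const BOT_PATTERNS = [/googlebot/i, /bingbot/i, /duckduckbot/i, /yandex/i];

// Decide whether the request comes from a known crawler
function isBot(userAgent) {
  return BOT_PATTERNS.some((pattern) => pattern.test(userAgent || ''));
}

// Placeholder for a real prerenderer (e.g. a headless-Chrome service)
function renderStaticHtml(url) {
  return `<html><body><h1>Prerendered snapshot of ${url}</h1></body></html>`;
}

// Express-style middleware: bots receive the static snapshot, users fall
// through to the normal client-side rendered application
function dynamicRendering(req, res, next) {
  if (isBot(req.headers['user-agent'])) {
    res.send(renderStaticHtml(req.url));
  } else {
    next();
  }
}
```

In practice, the prerendering step is usually delegated to a dedicated service rather than built in-house, which is part of the complexity that makes this solution easy to get wrong at scale.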
IKEA proves that you can present your main content in a way that is accessible for bots and interactive for users.
On IKEA.com's product pages, the product descriptions are served behind clickable panels. When you click one, the description dynamically appears on the right-hand side of the viewport.
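The underlying pattern, sketched below with hypothetical markup (not IKEA's actual code), is to ship the description in the initial HTML and only toggle its visibility with JavaScript, so bots can read it without clicking:

```html
<!-- The description is present in the server-rendered HTML, so crawlers
     can index it even though users only see it after a click -->
<button aria-expanded="false" aria-controls="product-details">
  Product details
</button>

<div id="product-details" hidden>
  <p>The full product description lives in the initial HTML.</p>
</div>

<script>
  // JavaScript merely reveals the existing content in a side panel
  const button = document.querySelector('button[aria-controls]');
  button.addEventListener('click', () => {
    document.getElementById('product-details').hidden = false;
    button.setAttribute('aria-expanded', 'true');
  });
</script>
```

The key point is that the content exists in the DOM before any interaction, which is what makes it both interactive for users and accessible for bots.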
Take care of your indexing pipeline and check if:
Your content actually gets indexed on Google.