Pagination - What Is It And Why Should You Implement It?
In simple terms, pagination means the ordinal numbering of pages: dividing website content and placing it on separate subpages. Here, content refers to articles and blog entries, as well as product lists in the case of online stores. Each page has its own URL and is treated as an individual subpage of the site. Pagination is implemented mainly to optimize website loading time, so it has a huge impact on the usability of the site and, consequently, on its conversion rate. The longer potential customers have to wait for graphics to load, the greater the probability that they’ll leave your website and give up on shopping. Thanks to properly deployed pagination, users see a handful of products from a selected category, together with the number of pages with similar goods, right after entering the website. Such a solution is a great incentive to browse other subpages of your website, and it’s definitely better than making users wait for all the product photos in a given category to load. The number of items displayed on one page also needs to be carefully analyzed.

Google Crawling vs. Pagination
You already know what pagination is and why it’s beneficial for your website. But have you wondered what to keep in mind when deploying it? Website crawling is extremely important: it takes place when Google robots visit your site and analyze the content and code of individual subpages. Based on this analysis, the robots decide which position a given page should occupy in the search results. The process is affected by numerous factors, and improper pagination can have disastrous consequences.

Problems Resulting From Improperly Deployed Pagination
Let’s start by analyzing the potential issues caused by an incorrect implementation of pagination. Later, we’ll move on to tips that will help you eliminate errors and make pagination more user-friendly.

1. Increasing Internal Duplicate Content
Internal duplicate content is one of the issues you may need to deal with after implementing pagination on your online store. It means that the same or very similar texts can be found on different subpages of one website. To learn more, go to one of our previous articles: Content duplication. How to deal with this issue? Imagine that you run an online clothing store. The “winter jackets” category contains 150 products and a few texts. While implementing pagination, you decide that each subpage displayed to users should contain the same texts and 15 models of jackets. The individual pages aren’t excluded from indexing, so Google robots can easily access them. In such a situation, each of the paginated pages with the same texts will increase internal duplicate content, despite having separate URLs. Most paginated pages also have identical, automatically generated titles and meta descriptions. All of this will negatively affect the positions of your website’s subpages in the search results.

2. Burning The Site’s Crawl Budget
Another negative consequence of improperly implemented pagination is wasting the site’s crawl budget, i.e. the number of subpages on a given domain that Google robots are able and willing to crawl. If your store offers a wide range of goods and all paginated product pages are crawled, it means that Google robots visited and analyzed each of them. The drawback is that the robots may waste time examining pages with texts of little value or, in the worst-case scenario, with duplicate content. This, in turn, may significantly delay the discovery of subpages and categories with quality content.

3. Not Being Useful For Users
Pagination needs to be tailored to all potential website visitors to serve its purpose. Page numbers that are too small to read or tap won’t be user-friendly, and potential customers may decide to leave the store. The situation is similar when the spacing between links is too tight or too many numbers are displayed at once. You don’t have to worry about problems caused by improper pagination if you work with professionals. Try our technical SEO services and sleep safe and sound knowing your website is in good hands.

Best Practices For Implementing Pagination
1. Canonical URLs
Creating canonical URLs (with the rel=canonical tag) may serve as a solution that eliminates duplicate content caused by publishing the same texts on several subpages of your store. The canonical tag is placed in a page’s code to inform Google robots which URL on the site is the canonical one. Implementing it allows you to show the bots which subpages shouldn’t be indexed - in most cases, only the first page is. If you want to find out more about canonical URLs, visit our blog: rel=canonical. What are canonical URLs for?
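As an illustration, and keeping to the hypothetical clothing.store.com addresses used in the rel="prev"/rel="next" example later in this article, a canonical tag pointing every paginated page back to the first one could be placed in each page’s <head> like this:

<link rel="canonical" href="clothing.store.com/winter-jackets-1"/>

This is only a sketch - whether all paginated pages should point to the first page, or each page to itself, depends on whether their content is duplicated or unique.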
2. The robots.txt File

The robots.txt file is another solution that helps prevent burning the crawl budget and limits internal duplicate content. This file allows you to communicate effectively with the search engine robots crawling your website. Apart from navigating the bots, you can also limit their access to subpages that shouldn’t be displayed in the search results. When it comes to pagination, in most cases all pages except for the first one are excluded from being crawled.
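For illustration only - the exact rules depend entirely on your URL structure - a store whose paginated URLs use a hypothetical ?page= parameter (e.g. /winter-jackets?page=2) could exclude all paginated pages except the first one like this:

User-agent: *
# Block every URL containing a ?page= parameter (hypothetical URL scheme)
Disallow: /*?page=

The first page (/winter-jackets, without the parameter) remains crawlable. Note that Googlebot supports the * wildcard in robots.txt rules.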
3. The “noindex” Tag

Another way to prevent individual subpages from being indexed is applying the “noindex” meta tag. Placing it in the <head> element of the code of a particular subpage will clearly inform Google robots that you don’t want this page to be analyzed and displayed in the search results. Similarly to the robots.txt file, the noindex tag is implemented on subsequent paginated pages, except for the first one.
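As a sketch, the tag as it could appear in the <head> of the second and subsequent paginated pages:

<meta name="robots" content="noindex, follow"/>

The “follow” value is worth noting: it lets the robots keep following the links on the page (for example, to individual products) even though the page itself stays out of the index.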
4. Sitemap

In simple terms, a sitemap is created to navigate search engine robots. Thanks to it, they can move around the website more efficiently, and you’re able to indicate the URLs that are supposed to be indexed by Google. Your sitemap should contain only the main category pages with unique content, so it’s better to avoid placing paginated subpages with duplicate content in it. However, remember that it’s only a map: leaving a given URL out of the sitemap file doesn’t guarantee that Google robots won’t visit it anyway. Therefore, it’s still advisable to implement one of the abovementioned solutions to clearly inform Google robots that a given subpage isn’t intended to be indexed.
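For illustration, a minimal sitemap.xml listing only the main category page of our hypothetical store (sitemaps require full, absolute URLs, so a protocol is added here):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://clothing.store.com/winter-jackets-1</loc>
  </url>
</urlset>

Paginated subpages such as winter-jackets-2 are simply left out of the file.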
5. rel="prev" And rel="next" Attributes

This point applies mainly to websites without internal duplicate content. If your store contains categories that can be grouped in a way that ensures each page has unique content and products, you can use the rel="prev" and rel="next" attributes. In this case, you shouldn’t prevent Google robots from crawling your subpages, as these attributes inform them which subpages form a whole. Consequently, the search results will lead users to the first page and the beginning of the text. If necessary, you can apply canonical URLs and the rel="prev" and rel="next" attributes simultaneously. Implementing the attributes is a piece of cake: you just need to place them in the HTML or HTTP headers of individual subpages. So, the HTML code for the winter jackets category of our exemplary clothing store will look like this:

The attribute on the first page, indicating the next subpage:
<link rel="next" href="clothing.store.com/winter-jackets-2"/>
The attributes on the second and each subsequent page that indicate the previous and next subpage:
<link rel="prev" href="clothing.store.com/winter-jackets-1"/>
<link rel="next" href="clothing.store.com/winter-jackets-3"/>
...etc.
The attribute on the last page that indicates the previous subpage:
<link rel="prev" href="clothing.store.com/winter-jackets-9"/>
6. Usability
Apart from taking care of all the technical aspects, you also need to make sure that your pagination is user-friendly and helpful. Focus on elements such as:
- The size of links - elderly and visually impaired people also browse the Internet, and more and more users visit websites via mobile devices; page numbers that are too small to tap will be considered useless.
- The space between links to page numbers - not being able to click on the exact page you want to browse, because the spaces between the links are too small, may really drive visitors up the wall. Make sure that both mobile and desktop users can conveniently navigate your site.
- Highlighting the page number the user is currently browsing - this solution makes it easier for visitors to realize how many products they’ve already seen and how many more are left. It’s a good idea to use different colors to highlight page numbers.
- The number of pages - having to go from one page to another after seeing just a few products can be irritating. Try to find a happy medium between a satisfactory number of products per page and a sufficiently quick loading time.
- Navigation bars - provide navigation links such as the "next", "previous", "first" and "last" page. Thanks to them, it’ll be much easier for users to navigate the site conveniently, as they’ll be able to enter another subpage or jump to the beginning or end of the list whenever they want.
- Mobile version = new version - today, mobile users constitute a completely new group of recipients and each element of a website should be tailored also to them. Make sure that mobile users can conveniently browse your paginated pages and navigate the site. It’s possible to design separate pagination patterns for mobile and desktop devices. So why shouldn’t you take advantage of this option?