Success Case: SEO Indexing


One of the mantras you will hear most often among digital marketers is that SEO takes time.

It is the modern version of the saying “Rome wasn’t built in a day”.

Therefore, one of the biggest challenges we face at iSocialWeb when we start a new SEO project is to find the “key” that allows us to launch a client’s domain in the SERPs in a short time.

This is precisely the case today:


(GSC > Performance) – Period from 29 March 2022 to 28 June 2022

A German classifieds website with multi-country ambitions that we helped get off the ground in just 6 weeks (21 days if you exclude the audit time) after more than 12 months of stagnation.

Achieving the following results:

  • ∆ 92% increase in organic visits: from 1,513 to 2,907 clicks per day.
  • ∆ 84.76% increase in Google impressions: from 14,780 results shown to 27,308 appearances in the SERPs.

And all of this is thanks to the fact that, in less than 21 days after the SEO audit, we boosted the project’s indexation from 8k URLs to almost 60k.

This explains the surge in impressions and clicks.

Starting Point: A Project Stalled since June 2021

To better understand what our team achieved, it is necessary to first understand the background of the project prior to our intervention.

We were faced with a domain that had suffered a significant drop in organic traffic, which the site owner had managed to stabilize.

However, the following 12 months showed a complete lack of progress.

My Ladies 2

 

(GSC > Performance) – Period from 2 March 2021 to 9 May 2022

Daily organic traffic in this period was stuck in a range of 820 to 1,623 visits per day.

Clearly, not enough for our client.

Moreover, in these cases, there is a high risk of relapse. You know that in SEO there are rarely flat trends.

Either it goes up or it comes down.

In general, the situation stabilizes after a few fixes, but the underlying problem is rarely 100% solved, so sooner or later Google will once again stop displaying the site in the top positions of its results.

The Challenge: How to open crawling for thousands of URLs without opening an SEO Pandora’s box

Fortunately, the SEO audit of the domain revealed that there were crawling and indexing problems.

On the one hand, the robots.txt file was preventing search engines from crawling the foreign-language versions of the website.

On the other hand, the profiles created by users on the website were not being indexed correctly.

A misconfiguration meant that the category pages linked to parameterised versions of the user profiles, which in turn pointed to the original version via a rel="canonical". This was preventing search engines from indexing that content.

As a result, a large number of URLs with SEO value had no chance of ranking at all.
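To illustrate the pattern uncovered in the audit, here is a minimal Python sketch. The domain, category URL and the libraries used (requests and BeautifulSoup) are our own assumptions for the example, not the client’s actual stack; it simply flags category-page links that point to parameterised URLs whose canonical resolves to a different address:

```python
# Minimal sketch: flag category-page links that point to parameterised profile
# URLs whose canonical resolves to a different (clean) URL.
# The domain and URL patterns below are hypothetical placeholders.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

CATEGORY_URL = "https://example-classifieds.com/category/zurich/"  # hypothetical

def canonical_of(url: str) -> str | None:
    """Fetch a page and return the href of its rel=canonical link, if any."""
    html = requests.get(url, timeout=10).text
    tag = BeautifulSoup(html, "html.parser").find("link", rel="canonical")
    return tag["href"] if tag and tag.has_attr("href") else None

def find_canonicalised_parameter_links(category_url: str) -> list[tuple[str, str]]:
    """Return (linked_url, canonical_url) pairs where the category links to a
    parameterised URL that canonicalises to a different address."""
    html = requests.get(category_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    findings = []
    for a in soup.find_all("a", href=True):
        linked = urljoin(category_url, a["href"])
        if not urlparse(linked).query:          # only parameterised links
            continue
        canonical = canonical_of(linked)
        if canonical and canonical != linked:   # link target is not the canonical
            findings.append((linked, canonical))
    return findings

if __name__ == "__main__":
    for linked, canonical in find_canonicalised_parameter_links(CATEGORY_URL):
        print(f"Category links to {linked} but canonical is {canonical}")
```

Every match is a link that passes internal authority to a URL Google is being told not to index, which is exactly the trap described above.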

In addition, the pagination configuration was poor:

Only the main category page was indexable, while all the other paginated pages carried a noindex directive.

These three issues:

  • Unoptimised category pagination.
  • Unindexed user profile pages.
  • Other language versions of the website blocked in robots.txt.

were causing thousands of URLs to go uncrawled and unindexed, preventing the project from getting off the ground.

The Key Point: Will there be enough crawl budget to uncover the entire domain?

It was clear that the project needed to get all URLs with SEO value indexed.

But if you have been in the SEO world for a while, you will know that opening up crawling for a high volume of URLs is not without risk.

To put it plainly:

Opening the crawling in a project of this size is like opening a bottle of champagne straight from the drum of a washing machine spinning at 1,800 RPM.

When you open it, you’ll shoot straight up to the ceiling, but right after that you’ll fall back down, losing most of the traffic you’ve gained.

So, by opening the indexing to thousands of URLs, you are bound to encounter problems ranging from a lack of crawl budget to cannibalisation, duplicate content, thin content, etc…

For this very reason, all this accumulated force must be controlled. But, how can you handle such an explosive situation?

First, we checked the Crawl Stats report in GSC to identify potential issues and clean up the domain to reasonable parameters, as shown in the graph:

(GSC > Crawl Stats)

As you can see in the image above, 97% of URLs return a 200 status code, so the percentage of URLs with problems is minimal.
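As a rough approximation of that check outside GSC, the sketch below tallies status codes straight from a server access log. The log path and the combined log format are assumptions for illustration; your hosting setup may differ:

```python
# Rough sketch: tally HTTP status codes from an access log to approximate the
# "% of URLs returning 200" check we did in GSC's Crawl Stats report.
# The log path and the combined-log format are assumptions for illustration.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path
# Combined log format: ... "GET /path HTTP/1.1" 200 1234 ...
STATUS_RE = re.compile(r'"\S+ \S+ \S+" (\d{3}) ')

def status_breakdown(log_path: str) -> Counter:
    counts: Counter = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = STATUS_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    counts = status_breakdown(LOG_PATH)
    total = sum(counts.values()) or 1
    for status, n in counts.most_common():
        print(f"{status}: {n} ({100 * n / total:.1f}%)")
```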

Second, we made sure that we would not have any crawl budget problems.

In this case, the domain already had some authority and, at the same time, we found clear signals that Google was already crawling, indexing and even ranking pages in other languages that were blocked by robots.txt.

In fact, some of these URLs were already bringing a lot of organic traffic to the domain.

This removed our initial hesitation and allowed us to open up crawling without fear of running out of resources.

This encouraged us to pursue the strategy.
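Incidentally, before touching anything it is worth double-checking exactly which directories the live robots.txt blocks for Googlebot. A minimal sketch using Python’s built-in robotparser, with a placeholder domain and directory names rather than the client’s actual URLs:

```python
# Quick sketch: check which language directories the live robots.txt blocks
# for Googlebot. Domain and directory names are hypothetical placeholders.
from urllib.robotparser import RobotFileParser

SITE = "https://example-classifieds.com"           # hypothetical
LANGUAGE_DIRS = ["/de/", "/fr/", "/en/", "/es/", "/ru/", "/hu/"]

parser = RobotFileParser(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt

for path in LANGUAGE_DIRS:
    allowed = parser.can_fetch("Googlebot", f"{SITE}{path}")
    print(f"{path}: {'crawlable' if allowed else 'blocked by robots.txt'}")
```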

The Strategy: Opening user profiles to indexing, implementing iSocialWeb’s Pagination Style + technical fixes to facilitate crawling

Given that in the short term you cannot increase the authority of a domain nor the crawl budget, you have to balance your crawling very well so as not to exhaust the resources that Google has allocated to your project.

As soon as the audit was completed and the findings were discussed with the site owner, the decision was made:

Step 1: Fix the linking to user profiles from category pages

On June 9th, work began with the correction of the parameterised click URLs of the user profiles linked from the category pages.

We explained to the site owner that this click tracking could be handled via the dataLayer, without having to create new URLs.

In addition, all these pages contained a canonical pointing to their original version.

Therefore, the parameterised URLs were replaced in the internal linking and, at the same time, 301-redirected to their original counterparts.

This way, we made it clear to Google which version of the listings it should index.

Remember that, up to that date, the categories were linking to the classified ads via parameterised URLs containing a rel="canonical" pointing to the original URL.

The problem is that canonicalization is a very weak signal.

Bear in mind that the canonical is a hint (not a directive). Google was not sure which URL version to take into account and, treating them as duplicates, ended up indexing none of these URLs despite their great ranking potential.

 A mistake that was finally rectified.
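The redirect rules themselves were implemented on the client’s server; purely as an illustration of the mapping, here is a small Python sketch that derives the clean 301 target from a parameterised profile URL (the sample URLs are hypothetical):

```python
# Sketch of how a 301 redirect map from parameterised profile URLs to their
# clean counterparts could be generated. The sample URLs are hypothetical;
# the actual fix on the client's site was implemented server-side.
from urllib.parse import urlsplit, urlunsplit

def clean_target(parameterised_url: str) -> str:
    """Drop the query string (and fragment) to obtain the canonical target."""
    parts = urlsplit(parameterised_url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

sample_urls = [
    "https://example-classifieds.com/profile/1234?src=category&page=3",
    "https://example-classifieds.com/profile/5678?utm_source=internal",
]

for url in sample_urls:
    print(f"301  {url}  ->  {clean_target(url)}")
```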

Step 2: Open crawling to other language versions of the website

Almost at the same time, we proceeded to act on the different language versions of the website.

The website has 7 different languages.

When we started, 6 of them were blocked in the robots.txt and even carried a noindex directive.

Here, action was taken in two parts:

  1. For the languages with no business value (ES, RU and HU), we obfuscated all links to these versions and applied a noindex tag to prevent their indexation.
  2. For the languages with business value (DE, FR and EN), we removed the noindex tag, which made no sense, and correctly configured the hreflang tags indicating the language and the country the business targets (in this case Switzerland, CH).

Finally, in both cases we removed the blocking of these directories in the robots.txt.

So, this left the crawl open only for the language versions we were interested in.
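To illustrate what the hreflang configuration for the business-value languages might look like, here is a hedged sketch; the domain, URL structure and the choice of x-default are placeholders, not the client’s actual setup:

```python
# Illustrative sketch: generate hreflang annotations for the language versions
# kept open to indexing, targeting Switzerland. URL structure is an assumption.
BASE = "https://example-classifieds.com"   # hypothetical
LANGS = {"de": "de-CH", "fr": "fr-CH", "en": "en-CH"}

def hreflang_tags(path: str) -> list[str]:
    """Build the <link rel="alternate"> set for one page, plus an x-default."""
    tags = [
        f'<link rel="alternate" hreflang="{code}" href="{BASE}/{lang}{path}" />'
        for lang, code in LANGS.items()
    ]
    tags.append(f'<link rel="alternate" hreflang="x-default" href="{BASE}/de{path}" />')
    return tags

for tag in hreflang_tags("/category/zurich/"):
    print(tag)
```

Each language version should output the full set of alternates, including a self-reference, so the annotations are reciprocal.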

By June 13th, all this was sorted out and we started to see the first positive results.

Step 3: Remove duplicate version of the blog

While we were giving Google time to crawl the user profiles and other pages, we solved a duplicate content problem related to the blog.

This section of the website had been configured as both a subdomain and a subdirectory:

  • https://domainname.com/blog/
  • https://blog.domainname.com/

As a result, all blog posts were being duplicated, increasing the risk of a duplicate content penalty.

So, in this case, we recommended keeping the blog in the subdirectory version and redirecting the subdomain entries.

Explanation: this was done because the subdirectory strategy allows us to leverage the authority of the root domain across all sections, whereas the subdomain strategy in practice is like working on a new website, independent of the root domain.

In this way we avoid the risk of duplicate content and take advantage of the transfer of authority.
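As an illustration only (the actual redirects were configured at the web-server level), here is a minimal sketch of the subdomain-to-subdirectory mapping, reusing the placeholder domain from the list above:

```python
# Sketch: map blog subdomain URLs to their subdirectory equivalents for 301
# redirects. "domainname.com" mirrors the placeholder used above; in practice
# the redirects were configured at the web-server level, not in Python.
from urllib.parse import urlsplit

def subdirectory_target(subdomain_url: str) -> str:
    """https://blog.domainname.com/post -> https://domainname.com/blog/post"""
    parts = urlsplit(subdomain_url)
    root = parts.netloc.removeprefix("blog.")
    path = parts.path.lstrip("/")
    return f"https://{root}/blog/{path}"

examples = [
    "https://blog.domainname.com/",
    "https://blog.domainname.com/how-to-publish-an-ad",
]

for url in examples:
    print(f"301  {url}  ->  {subdirectory_target(url)}")
```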

Step 4: iSocialWeb’s Pagination Style 

Finally, on June 20th, iSocialWeb’s pagination style was implemented.

Up to that day, the client had kept all paginated pages in noindex except for the first one, and with a rel="canonical" pointing to the homepage.

As a result, user profiles linked from category pages beyond the first page did not get any traction.
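The full details of iSocialWeb’s pagination style are beyond the scope of this post. As a hedged sketch, one configuration consistent with what is described here (indexable paginated pages with self-referencing canonicals) could look like this, using placeholder URLs and a ?page= parameter that is our own assumption:

```python
# Minimal sketch of indexable pagination with self-referencing canonicals --
# one common configuration consistent with what is described here, not a full
# reproduction of iSocialWeb's pagination style. URLs are placeholders.
BASE = "https://example-classifieds.com/category/zurich/"

def pagination_head_tags(page: int) -> list[str]:
    """Head tags for page N of a category: indexable, canonical to itself."""
    page_url = BASE if page == 1 else f"{BASE}?page={page}"
    return [
        '<meta name="robots" content="index, follow" />',
        f'<link rel="canonical" href="{page_url}" />',
    ]

for page in (1, 2, 3):
    print(f"-- page {page} --")
    for tag in pagination_head_tags(page):
        print(tag)
```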

Once the new pagination configuration was completed, in less than 48 hours we went from 8k URLs indexed to over 50k URLs without intervention.


(GSC > Coverage > Valid URLs) – Period from 29 March 2022 to 28 June 2022

In other words, simply opening the crawl to the search engines was enough; it was not necessary to force indexing.

A very good sign.

By June 26th we had more than 59k pages indexed and 3 days later 67k.

This meant almost doubling organic traffic since we started working on the optimizations on June 9th.

Breaking Down the Outcome

Many times in SEO we say that you have to wait for a reasonable amount of time to get results.

We almost always talk about months.

However, sometimes it is only a matter of weeks.

As you can see in this sequence of graphs after the changes:

First, Google began to crawl thousands of URLs from the domain it had ignored up to that point:


(GSC > Crawl Stats) – Period from 29 March 2022 to 28 June 2022

Secondly, it began to index the content:


(GSC > Coverage) – Period from 29 March 2022 to 28 June 2022

And finally, to position our content and send organic traffic:


(GSC > Performance) – Period from 9 May 2022 to 27 June 2022

A few modifications to the robots.txt and indexing directives were enough to generate this increase.

And the fact is, when you have:

  • Thousands of crawled but unindexed pages with SEO value, as was the case with the user profiles.
  • URLs excluded by a noindex tag, such as the paginated pages.
  • Other language versions of the website blocked in robots.txt.

it is relatively easy to change the settings, open up the crawl and get them indexed.

In Conclusion:

When you open up crawling in a project of this type, you have to be very careful, as you are bound to run into thousands of problems with cannibalisation, duplicate content, empty URLs or URLs with no SEO value, etc.

Fortunately, due to the nature of the project, most of the unindexed content was original.

This made things much easier.

In addition, minor design changes were made that helped improve Core Web Vitals (CWV) and optimise loading times, significantly reducing the risk of problems with the crawl budget allocated by Google to the project.

In fact, it is for this reason that many projects of this size either struggle to open up the crawl or are afraid to do so.

Firstly, because they are afraid of exhausting their crawl budget; and secondly, because once the crawl is opened, they are not able to manage everything that comes afterwards: duplicate content, thin content, 5XX errors, etc.

However, thanks to iSocialWeb’s experience, this is never a problem.

In fact:

We really enjoy managing these kinds of projects and everything that comes after.

The real challenge now is to get the indexing right.

So now you know, if you need help to optimize the performance, crawling and indexing of your website, contact us and we will help you to make it a success.
