iSocialWeb

Success story

SEO Indexing Case Study

How to facilitate crawling for Google and double organic traffic in 21 days.

One of the most repeated mantras among digital marketing professionals, one you're probably tired of hearing, is that SEO takes time. We could call it the modern version of "Rome wasn't built in a day".

That's why one of the biggest challenges we face at our SEO Agency when kicking off a new project is finding the button to push that lets us launch a client's domain into the SERPs in a short time. This is precisely the case we'll see today.

In this SEO Indexing Case Study we'll see how we helped a German-language classifieds site with multi-country ambitions, stuck for over 12 months, take off in just 6 weeks (21 days if we leave out the audit time).

SEO Indexing Case Study: German classifieds site took off in just 6 weeks

In just 21 days, we achieved the following results:

And all of this was thanks to the fact that, in just over 21 days after the SEO audit, we took the project's indexation from 8k URLs to almost 60k, which explains the surge in impressions and clicks.

Starting point: a project stuck since June 2021

To better understand what we achieved, it's necessary to describe the state of the project before our intervention. We started with a domain that had suffered a major organic traffic drop, which the client had managed to stabilize. However, over the last 12 months there had been a total lack of progress.

Total lack of progress over the last 12 months

Daily organic traffic during this period was stuck in a band oscillating between 820 and 1,623 daily visits. Insufficient for our client. Moreover, in these cases there's a high risk of relapse. As you know, in SEO flat trends rarely exist: you either go up or you go down.

And as a rule, the situation stabilizes after making some fixes, but the root problem is rarely 100% resolved, so Google sooner or later stops showing the page in the top positions of its results.

The challenge: how to open up crawling for thousands of URLs without lifting the lid on an SEO Pandora's box

Fortunately, the SEO audit of the domain revealed that there were crawling and indexing issues:

These three points were causing thousands of URLs to go uncrawled and unindexed, preventing the project from taking off.

The question: will we have enough crawl budget to uncover the entire domain?

It was clear the project needed to get all the URLs with ranking value indexed. But if you've been in SEO for a while, you know that a "lid-off" operation for a large volume of URLs isn't without risks.

Put simply:

Opening up crawling on a project this size is like opening a champagne bottle fresh out of a washing machine spinning at 1,800 RPM.

When you do, traffic shoots up like foam, but right after comes the fall, and you lose most of what you gained.

Because by enabling indexing of thousands of URLs, you're bound to run into problems ranging from crawl budget shortages to cannibalizations, duplicate content, empty URLs and so on.

For that reason, all that built-up force must be managed. So how do you handle a situation as explosive as this?

First, we checked the crawl statistics in GSC to identify problems and clean up the project to reasonable parameters like those in the chart:

GSC crawl statistics check to identify problems and clean up the project

As you can see in the image above, at that point 97% of URLs were returning 200 codes, so the percentage of problematic URLs was minimal.

Second, we made sure we wouldn't have crawl budget problems.

In this case, the domain already had some authority and at the same time we found evidence that Google was already crawling, indexing and even ranking pages in other languages blocked by robots.txt. In fact, some of these URLs were already bringing quite a bit of organic traffic to the domain.

This cleared our initial doubts about opening up crawling without fear of running out of resources, which encouraged us to move forward with the strategy.

The solution: opening up indexing to listings and iSocialWeb-style paginations, plus technical fixes to ease crawling

Since in the short term you can't increase a domain's authority or its crawl budget, you have to balance the opening carefully so you don't exhaust the resources Google has allocated to your project. Right after finishing the audit and discussing it with the client, we decided to proceed as follows:

Step 1: Fix the linking from categories to listings

On June 9, work began on fixing the categories, which linked to the listing URLs through parameters used to track clicks. We explained to the client that this tracking could be handled with a dataLayer push, without needing to create new URLs. Also, all of these parameterized pages contained a canonical pointing to their original version.

Therefore, we replaced the parameterized URLs in the internal linking while at the same time 301-redirecting them to their original counterparts. This way we made it clear to Google which version of the listings should be indexed.

Remember that, up to that date, the categories linked to the classified-ad listings via parameterized URLs, each containing a rel="canonical" pointing to the original URL.

The problem is that this is a very weak signal.

Keep in mind that canonical is a suggestion (not a directive), so Google didn't know which URL version to consider and, treating them as duplicates, ended up indexing none of these URLs despite their great ranking potential. A mistake finally put right.
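To make the fix more tangible, here is a minimal sketch of what the before and after might look like. The domain, paths and dataLayer event names are hypothetical (the client's actual markup isn't shown in this case study), so treat it as an illustration of the pattern rather than the exact implementation.

```html
<!-- BEFORE (hypothetical): the category page linked each listing through a
     parameterized URL whose only disambiguation was a rel="canonical" hint. -->
<a href="https://example.com/anzeigen/gebrauchtwagen-berlin?src=kategorie&pos=3">
  Gebrauchtwagen in Berlin
</a>

<!-- AFTER (hypothetical): link directly to the clean URL and move the click
     tracking into a dataLayer push instead of URL parameters. -->
<a href="https://example.com/anzeigen/gebrauchtwagen-berlin"
   onclick="window.dataLayer = window.dataLayer || [];
            window.dataLayer.push({event: 'listing_click', source: 'kategorie', position: 3});">
  Gebrauchtwagen in Berlin
</a>

<!-- The old parameterized URLs were then 301-redirected to their clean
     counterparts, so only one indexable version of each listing remains. -->
```

With the tracking handled in the dataLayer and the legacy parameterized URLs 301-redirected, the internal linking exposes a single URL per listing, and Google no longer has to guess which version to index.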

Step 2: Open up crawling to the site's versions in other languages

Almost at the same time, we acted on the site's different language versions. The site had 7 different languages and, when we kicked off, 6 of them were blocked in robots.txt and even carried a noindex directive.

Here, we acted on two fronts:

Finally, in both cases we removed the robots.txt blocks on these directories, so that crawling was open only for the language versions we were interested in. All of this was fixed by June 13, and we started noticing the first positive results.
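As an illustration, the robots.txt change could look roughly like the sketch below. The directory names are placeholders, since the article doesn't disclose the site's URL structure; the point is simply that the blanket Disallow rules on the language versions we wanted crawled were removed (together with their noindex tags), while anything that should stay out of the crawl kept its rule.

```
# BEFORE (hypothetical): six of the seven language versions blocked from crawling
User-agent: *
Disallow: /en/
Disallow: /fr/
Disallow: /it/
Disallow: /nl/
Disallow: /pl/
Disallow: /cs/

# AFTER (hypothetical): the rules for the language versions we want crawled are
# removed; only the directories that should stay out of the crawl keep a rule
User-agent: *
Disallow: /cs/
```

Note that a URL blocked in robots.txt can't even be fetched, so any noindex on it is never seen; unblocking the directories is what allowed Google to start crawling and evaluating those pages.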

Step 3: Remove the blog's duplicate version

While giving Google time to crawl listing and pagination URLs, we solved a duplicate-content issue related to the blog.

This section of the site had been configured simultaneously as both a subdomain and a subdirectory:

As a result, all blog posts were being duplicated, increasing the risk of a penalty for duplicate content. In this case, we recommended keeping the blog version in the subdirectory and 301-redirecting the subdomain posts to it.

Explanation: This was done because the subdirectory strategy lets you leverage the root domain's authority across all sections, whereas the subdomain strategy is in practice like working on a brand-new site independent from the root domain.

This way we eliminated the duplicate-content risk and took advantage of the authority transfer.
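The case study doesn't say which web server or CMS the project runs on, so the snippet below is only an illustrative sketch: on nginx, for example, a blanket subdomain-to-subdirectory redirect could be configured roughly like this (blog.example.com and /blog/ are placeholder names).

```
# Hypothetical nginx sketch: permanently redirect every post on the blog
# subdomain to its counterpart under the /blog/ subdirectory of the root domain
# (TLS listener and certificates omitted for brevity).
server {
    listen 80;
    server_name blog.example.com;

    # 301 preserves the path, so blog.example.com/mein-artikel
    # ends up at example.com/blog/mein-artikel
    return 301 https://example.com/blog$request_uri;
}
```

A single permanent redirect rule like this consolidates the duplicate posts onto the subdirectory version and passes whatever signals the subdomain URLs had accumulated back to the root domain.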

Step 4: iSocialWeb-style paginations

Finally, on June 20, we implemented the iSocialWeb-style paginations.

Until that date, the client had kept all paginated pages in noindex except for the first page, with a rel="canonical" pointing to the main page.

As a result, the listings linked from category pages beyond the first page received no strength at all.
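The article doesn't spell out the exact "iSocialWeb-style" pagination setup, but based on the problem described, the change in the head of a paginated category page could look roughly like this (URLs are placeholders and the real configuration may include additional elements):

```html
<!-- BEFORE (hypothetical): page 2 of a category was noindexed and
     canonicalized to page 1, so the listings it linked to got no strength. -->
<meta name="robots" content="noindex, follow">
<link rel="canonical" href="https://example.com/kategorie/autos/">

<!-- AFTER (hypothetical): each paginated page is indexable and self-canonical,
     so Googlebot crawls it and follows its links to the deeper listings. -->
<meta name="robots" content="index, follow">
<link rel="canonical" href="https://example.com/kategorie/autos/seite/2/">
```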

Once the new pagination configuration was complete, in less than 48 hours we went from 8k indexed URLs to more than 50k without needing to do anything else.

In less than 48 hours we went from 8k indexed URLs to more than 50k

In other words, once crawling was opened up to the search engines, it wasn't necessary to force any indexing. A very good sign.

By June 26 we already had more than 59k indexed pages and by June 29, 67k. This meant almost doubling the organic traffic since we started working on the fixes on June 9.

Outcome

Many times in SEO we say you have to wait a reasonable amount of time to get results. It's almost always months. However, sometimes it's just a matter of weeks.

As you can verify in this sequence of charts after the changes:

First, Google started crawling thousands of domain URLs it had ignored until then:

Google started crawling thousands of domain URLs it had ignored until then

Then it started indexing the content:

GSC crawl statistics, period from 29 March 2022 to 28 June 2022

And finally, ranking our content and sending organic traffic:

GSC Coverage, period from 29 March 2022 to 28 June 2022

A few changes to robots.txt and the indexing directives were enough to produce this rise.

And when you have:

It's relatively simple to change the configuration, open up crawling and get indexed.

Final thoughts on the SEO Indexing Case Study

When you open up crawling on a project like this, you have to be very careful, since you're bound to run into thousands of issues: cannibalizations, duplicate content, empty URLs, URLs indexed with no ranking value, etc.

Fortunately, due to the type of project, most of the unindexed content was original, which made things much easier.

In addition, small design changes were made that helped improve Core Web Vitals (CWV) and optimize load times, significantly reducing the risk of running into issues with the crawl budget Google had assigned to the project.

In fact, this is why many projects of this size either crash and burn or never dare to open up crawling.

First because they fear exhausting their crawl budget, and second because once open, they're not able to manage everything that comes after: duplicate content, thin content, 5XX errors, and so on.

However, thanks to iSocialWeb's experience, this is never a problem.

In reality:

So now you know: if you need help optimizing the performance, crawling and indexing of your website, get in touch and we'll help make it a success.