If you have read some of my posts over on the GSA Forum, you may have noticed that I always recommend people stick to do follow only links in their link pyramids. This is partly due to both Google (here) and Bing (here) confirming that their crawlers discount no follow links from the link profile of a page, but also due to an old case study I completed years back.
Essentially, I set up two test campaigns that had indexing tiers attached to them, one pulling its targets from a no follow list and the other from a do follow list. I used the filtering method I explain here to ensure that all of the targets in the two folders were either do follow or no follow, to keep the test data as accurate as possible.
I recently decided to rerun the test for two reasons. Firstly, I wanted to confirm the results were still the same for my own background knowledge, and secondly, I wanted something to link people to when they ask me why I take the time and effort to filter out the no follow links.
How I Plan To Run The Test
As you may have guessed, I am going to use my list filtering method to create a clean verified folder of no follow links that the indexing tier project for this case study can pull its targets from. I will enable the blog comment, guestbook and image comment platforms for the project and then leave the project to run building 25 links per day to each URL held in the project.
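For anyone curious what that do follow / no follow split boils down to, here is a minimal Python sketch. It is not my actual GSA Search Engine Ranker setup, just an illustration of the underlying idea: a link counts as no follow when its anchor tag carries a rel attribute containing "nofollow", and as do follow otherwise. The function names are my own for this example.

```python
import re

def link_is_nofollow(anchor_html: str) -> bool:
    """Return True if an anchor tag carries rel="nofollow".

    An anchor with no rel attribute, or a rel attribute that does not
    include "nofollow", is treated as do follow.
    """
    match = re.search(r'rel\s*=\s*["\']([^"\']*)["\']', anchor_html, re.IGNORECASE)
    if not match:
        return False  # no rel attribute at all -> do follow
    return "nofollow" in match.group(1).lower()

def split_verified_list(anchors):
    """Split a verified list of anchor tags into do follow and no follow buckets."""
    dofollow, nofollow = [], []
    for anchor in anchors:
        (nofollow if link_is_nofollow(anchor) else dofollow).append(anchor)
    return dofollow, nofollow

if __name__ == "__main__":
    sample = [
        '<a href="http://example.com/a">a</a>',
        '<a href="http://example.com/b" rel="nofollow">b</a>',
        '<a href="http://example.com/c" rel="ugc nofollow">c</a>',
    ]
    do, no = split_verified_list(sample)
    print(len(do), "do follow /", len(no), "no follow")
```

In practice the filtering method I link above does this at scale against verified target lists, but the classification logic is this simple at heart.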
I will create a batch of automated web 2.0 pages for the project to build these no follow links to. After four weeks I will alive check the web 2.0 batch and then index check it to see how the no follow links have performed. No other links will be built to this automated web 2.0 batch and no premium link indexing service will be enabled for it, to keep the results for the case study as accurate as possible.
Running the test in this way will allow me to directly compare the results of this case study to the results of my index service case study here where I completed a similar process using do follow only links in an attempt to gauge how effective they were at indexing batches of URLs.
Time For A Prediction
As I have completed this test previously, I have a rough idea of how the no follow links are going to perform. If I had published this case study a week or so earlier, I would have predicted that the do follow only indexing tier would beat the no follow indexing tier's performance massively, but as you can see in my indexing service case study, the indexing rates of do follow indexing tiers have dropped massively recently.
Although I can’t remember the ballpark figure of indexed links from the no follow test batch when I performed this test in the past, I would still predict the do follow batch will beat it, even though it only managed to achieve an indexing rate of 19% recently!
The Results Are In!
As you can see, the no follow indexing tier managed to index 276 out of 2118 alive URLs by the end of the test, working out at just over 13%. Although I had originally planned to run this test for four weeks, I decided to cut it short to three weeks to keep it in line with my other case study with the do follow only link batch and enable a fairer comparison.
What I Make Of It
Well, the do follow indexing tier managed to index 19% of its web 2.0 batch while the no follow indexing tier only managed 13%, leaving a difference of only 6%. Although I no longer have the results from the first time I completed this test a few years back, I would imagine the do follow indexing tier achieved an index rate of around the 70-80% mark, as that is what I was used to achieving with the method until around June 2016.
Although the results for the do follow indexing tier have dropped recently, I remember my previous no follow indexing tier's rate being low enough to make me feel that using no follow links was a waste of my time, so I would imagine it performed similarly to how it has in this case study.
It would appear that no follow links do offer some benefit with regards to indexing links, as the case study I published regarding “natural indexing” suggested a web 2.0 link batch left to its own devices will only achieve an indexing rate of around 2%. That being said, you have to ask yourself: is it worth your time, effort and resources to use indexing tiers these days to achieve indexing rates of less than 20%?
Although the results may look close enough that filtering your lists seems not worth the bother, you have to take into account that both Google and Bing have confirmed they disregard no follow links, meaning it is still beneficial to filter your lists for contextual targets.
As the premium indexing service I use managed to achieve an indexing rate of 98% in my indexing service case study, I have begun to push all of my links through it rather than creating a do follow indexing tier for my link batches. This is just personal preference, and some users may have tweaked the indexing tier method to achieve a much higher indexing rate than I currently can, so remember to test, test and test some more!
I hope this post inspires some of my readers to go out and check the indexing rate of their contextual links and consider changing the method they rely on to get their links indexed within Google, as in my opinion a link only seems to offer a benefit to your pyramid when it is held in the Google index.