As a reader of my blog, you are able to save 15% on the retail price of any GSA tool! To get this discount, use any of the links below or in my posts, then use the coupon code “shaunmarrs” on the checkout page.
Since publishing my guides on how to build your own auto accept list, I have had a number of people asking me what my opinion is on reverse proxies, why I didn’t include them in the guides, and how to use them in GSA Search Engine Ranker. Due to this, I decided to publish a post answering all of these questions in one place.
What Are Reverse Proxies?
Also known as rotating proxies and backconnect proxies, reverse proxies are a specific type of proxy that provides the user with a static frontend address while the backend address automatically changes at set intervals. Many providers allow their users to choose how often the backend proxy addresses change, adapting them to their requirements.
My own personal experience with them was scraping search engines, specifically Google, at very high links-per-second rates. Their main advantage is that the backend addresses are drawn from pools of tens of thousands of addresses, increasing the chance that you will be assigned an address that has not been soft banned by Google, allowing you to scrape continuously.
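The rotation idea can be sketched in a few lines of Python. This is a toy model only, not any provider’s actual API: in reality the rotation happens server-side and you never see the pool, but it shows the relationship between the one static frontend address you configure and the changing backend exit address behind it.

```python
import random
import time

class RotatingProxy:
    """Toy model of a reverse (backconnect) proxy: one static frontend
    address, with the backend exit address swapped out of a large pool
    at a fixed interval. Illustrative only."""

    def __init__(self, frontend, pool, rotate_every=30):
        self.frontend = frontend          # the static address you configure in your tool
        self.pool = pool                  # tens of thousands of exit IPs
        self.rotate_every = rotate_every  # seconds between rotations
        self._backend = random.choice(pool)
        self._last_rotation = time.monotonic()

    def backend(self):
        """Return the current exit address, rotating it if the
        interval has elapsed since the last rotation."""
        now = time.monotonic()
        if now - self._last_rotation >= self.rotate_every:
            self._backend = random.choice(self.pool)
            self._last_rotation = now
        return self._backend

# Hypothetical pool and frontend address for illustration
pool = [f"10.0.{i // 256}.{i % 256}" for i in range(10_000)]
proxy = RotatingProxy("203.0.113.5:8080", pool, rotate_every=30)
print(proxy.backend())  # the exit IP in use right now
```

Your scraping tool only ever talks to the frontend address; each rotation gives it a fresh exit IP, which is what keeps the scrape running past Google’s soft bans.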
My Opinion On Reverse Proxies
Although they may have their uses for other internet marketing methods I personally believe they are a total waste of time and money for the vast majority of people wanting to build their own auto accept list for automated tools.
I can think of a few exceptions to this. If you want to build your own sets of premium lists to sell to a user base, then reverse proxies may be worth your time for scraping your initial list for processing. Additionally, if you are using custom footprints with high yield counts to build a higher quality target list, they may also be worth your time due to the number of initial targets you will need to build out your list.
On a quick side note, I am not saying reverse proxies are useless. They are becoming an integral part of the most recent forms of link indexing that require direct submission to Google, and I would imagine they have other uses outside of auto accept list building. My main reason for thinking they are a waste of time for users following the methods I published to build their own auto accept lists is their price.
The screenshot is taken from a provider chosen at random.
Even if we disregard the claimed 50% discount on the list prices in the screenshot above, a user would have to spend $24.99 per month for a grand total of five reverse proxy ports for scraping. In my opinion, this is far too expensive when you can use tools such as GSA Proxy Scraper, GSA Search Engine Ranker, and Scrapebox to scrape your own public proxies for the one-time fee of the tool. In my experience, these free public proxies are capable of scraping enough initial targets for identification for most people, especially when combined with link extraction.
The second reason I feel they are a waste of time and money for the general user building an auto accept list is the limited number of targets there are to scrape. Many of the stock footprints that come with GSA Search Engine Ranker return a majority of false positives, and the ones that are worth scraping have a limited target pool of less than five figures. In my opinion, the public proxies available from the tools mentioned above are enough to scrape these targets once you have filtered your footprints as explained in this post.
How To Use Them In GSA Search Engine Ranker
Most of the questions I have received about using reverse proxies in SER are about how to add them to the tool’s proxy pool while keeping them separate from the regular premium proxies used for your submissions.
There is a relatively easy workaround for this, provided you are only using your regular premium proxies for submissions and want to use the reverse proxies for scraping in SER without using any public proxies.
The first step is to add your premium proxies to the tool and set them as private proxies, then add your reverse proxies to the tool and set them as public proxies. If the tool automatically adds them as private proxies, then select your reverse proxies in the tool’s proxy pool, right-click them, and select “Toggle Private/Public” as shown in the screenshot below.
Then go back to your regular submissions tab in the tool’s options menu and set the search engine proxies to public, as shown in the screenshot below.
Although technically the frontend addresses of reverse proxies are private, SER will now treat them as public and use the proxies marked as public for its search engine scraping, meaning it will scrape with your reverse proxies.
That concludes my thoughts on reverse proxies and how you are able to use them to scrape in SER while still using premium proxies for your submissions. I hope this has helped any of my readers with similar questions to build their own auto accept lists.