Search for unique content - Profit Hunter

Back to the problem of duplicate content.

Sooner or later, every webmaster faces the question of where to get unique content. Stealing it is risky, rewriting takes too long, and ordering it from freelancers is expensive... a vicious circle.

Eli, the author of the Blue Hat SEO blog, offers two solutions to the problem. His post is more than six months old, so the topic has probably already made the rounds in the RuNet. If it has, I definitely missed it.

Archive.org

The Archive.org site is a perfect place to search for abandoned content. With it, you can browse the archives of many reputable article directories and news sites and find entries that once topped the search results but have since dropped out of the index. For example, take CNN.com.

1. Open Archive.org and enter the name of the site you are interested in into the search.

2. Select an older date: old pages are very likely to have dropped out of the search index.

3. Select the desired category.

4. Select an article relevant to the subject of your site.

5. Enter the query site:_article_address_ into Google and check the result. Ideally, it should look like this:

True, if you remove the www from the query, Google still finds one page, so this example is somewhat unfortunate.

6. Copy the text of the article to your site.

That's all. The problem of unique content has been solved 🙂
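The manual browsing above can also be scripted against Archive.org's CDX API, the machine-readable index behind the Wayback Machine. Below is a minimal Python sketch: it builds a query for old snapshots of a site and parses the plain-text response into records. The sample response line is invented for illustration; the field names follow the CDX default output order.

```python
from urllib.parse import urlencode

# Wayback Machine CDX API -- Archive.org's machine-readable snapshot index.
CDX_ENDPOINT = "http://web.archive.org/cdx/search/cdx"
# Default field order of a plain-text CDX response line.
CDX_FIELDS = ["urlkey", "timestamp", "original", "mimetype",
              "statuscode", "digest", "length"]

def cdx_query_url(site, year_from, year_to, limit=100):
    """Build a CDX query listing old snapshots of every page on a site."""
    params = {
        "url": site + "/*",          # match all pages under the domain
        "from": str(year_from),      # earliest snapshot year
        "to": str(year_to),          # latest snapshot year
        "filter": "statuscode:200",  # successful captures only
        "limit": str(limit),
    }
    return CDX_ENDPOINT + "?" + urlencode(params)

def parse_cdx(text):
    """Parse a plain-text CDX response into dicts (one per snapshot)."""
    rows = []
    for line in text.strip().splitlines():
        parts = line.split(" ")
        if len(parts) == len(CDX_FIELDS):
            rows.append(dict(zip(CDX_FIELDS, parts)))
    return rows

# Invented sample line in the CDX plain-text format, for illustration only.
sample = ("com,cnn)/2000/tech/story.html 20010203040506 "
          "http://www.cnn.com/2000/TECH/story.html text/html 200 ABCDEF123 5123")
snapshots = parse_cdx(sample)
```

Fetch the URL from cdx_query_url("cnn.com", 2000, 2002) with any HTTP client, then run each snapshot's `original` address through a site: query as in step 5.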

For this method, choose large, reputable sites; that way you will find what you need faster. Keep in mind that Archive.org does not always serve up the necessary pages, even though they are present in its archive. In addition, some sites, for example ezinearticles.com, block the archive via robots.txt.

If you are going to generate content on an industrial scale, the following method will suit you.


Sitemaps

If a site has a sitemap, you can easily collect the addresses of all its pages. Once you have the list of addresses, you can run each one through a site:... query and find the pages that have dropped out of the index.

  1. Find the sitemap and parse the individual page addresses from it.
  2. Write a script that runs each of these addresses through a site:... query.
  3. If the search engine returns more than zero results, delete the address from the list.
  4. Check the remaining addresses by hand and pick out the articles that interest you.
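The steps above can be sketched in a few lines of Python. The sitemap parser uses the standard sitemap.xml namespace; the index check is a callable you plug in yourself (e.g. a wrapper around a search API), since the original post doesn't prescribe one. The two-page sitemap below is invented for illustration.

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace; real sitemap.xml files declare it.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def parse_sitemap(xml_text):
    """Step 1: extract every page address from a sitemap.xml document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

def dropped_pages(urls, is_indexed):
    """Steps 2-3: keep only the URLs the search engine no longer returns.

    `is_indexed` is a callable you supply, e.g. one that runs a
    site:<url> query through a search API and returns True when the
    result count is above zero."""
    return [u for u in urls if not is_indexed(u)]

# Invented two-page sitemap, for illustration only.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://domain.com/autos/article-1.html</loc></url>
  <url><loc>http://domain.com/autos/article-2.html</loc></url>
</urlset>"""

urls = parse_sitemap(sample)
# Stub index check: pretend only article-1 is still indexed.
dropped = dropped_pages(urls, lambda u: "article-1" in u)
```

The remaining list (step 4) is what you review by hand.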

The disadvantage of this method is that sitemap parsing produces a lot of useless results, such as on-site search queries. To avoid this, pick a folder or subdomain on your topic and work only with it. If, for example, you need automotive articles, select the section of the sitemap that contains the domain.com/autos folder or the autos.domain.com subdomain.
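That folder/subdomain filter is a one-liner with the standard library. A sketch, with invented example URLs:

```python
from urllib.parse import urlparse

def in_topic(url, folder=None, subdomain=None):
    """True if the URL lives in the chosen folder or on the chosen subdomain."""
    parts = urlparse(url)
    if folder is not None and not parts.path.startswith(folder):
        return False
    if subdomain is not None and not (parts.hostname or "").startswith(subdomain + "."):
        return False
    return True

# Keep only automotive pages from a mixed URL list.
urls = [
    "http://domain.com/autos/review.html",
    "http://domain.com/search?q=cars",
    "http://autos.domain.com/news.html",
]
by_folder = [u for u in urls if in_topic(u, folder="/autos/")]
by_subdomain = [u for u in urls if in_topic(u, subdomain="autos")]
```

The on-site search URL is dropped by both filters, which is exactly the noise the post warns about.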

Alternatively, you can search for "unique" content in the cache of deleted pages. Many sites use a standard 404 error page. Enter the query site:domain.com "Sorry, this page cannot be found" and check the cached copies of the matching pages in other search engines.
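Composing that query correctly (quotes around the error message, the whole thing URL-encoded) can be sketched like this; the helper names are mine, not from the post:

```python
from urllib.parse import quote_plus

def error_page_query(domain, message="Sorry, this page cannot be found"):
    """Compose the query that finds indexed pages now serving a 404 message."""
    return 'site:%s "%s"' % (domain, message)

def search_url(query):
    """Turn the query into a browser-ready Google search URL."""
    return "https://www.google.com/search?q=" + quote_plus(query)

query = error_page_query("domain.com")
```

Swap in whatever error text the target site actually uses, then look the hits up in other engines' caches.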

An ethical note: don't forget to link to the original (at least from the main page of your site). Even though the content will be 100% unique as far as the search engine is concerned, it still has an author who deserves some credit.

Related posts:

  • Linkbaiting - Separating the wheat from the chaff
  • How to escape the duplicate content penalty
  • Day 11 - Content above all
  • How to create an information product in 72 hours

Like the articles? Subscribe to the newsletter!

