Here is the translation of Ely's BlueHatSeo article on creating gray websites, promised in the SEO-empire article.
So, gray SEO. First, let's understand what it is. Most webmasters rightly believe that gray SEO rests on dubious optimization techniques: techniques that leave you in doubt about whether the site in front of you is black or white. That doubt is a good thing, because if you manage to plant it in an ordinary user's mind, fooling the search bot will be no problem at all.
When designing a gray site's structure, the best approach is to find and copy the structure of a site that, in principle, cannot be banned by search engines. Digg is one such site. Digg's structure is as follows: the site has several main categories, divided into subcategories where news is posted. Each news item consists of a headline and a short (up to 255 characters) description, accompanied by user content (comments). Viewed objectively, this structure looks very dubious: there is very little content, it is not organized the way informational sites are, and the user content is, as a rule, poor and non-unique. On top of that, these "comments" pass no quality control except, again, user votes.
But we know that Digg, Reddit, and other social sites of this type enjoy enormous authority with search engines, so the engines' anti-spam algorithms do not react to their poor structure, much less treat it as a sign of black SEO. This gives us a big advantage, and our gray site will be built on it.
Now we need to decide on content sources. From Digg's example we know that the content of large social sites consists mostly of headlines and excerpts from news stories (the topic does not matter much). And this is another plus: Google does not treat reprinted news as duplicate content, since the same story is published almost simultaneously on a great many resources. That approach is quite logical.
So, getting content is not a problem: just pull it out of the RSS feeds of popular sites. Moreover, since we are copying Digg's structure, we do not need whole news stories, only their headlines and short excerpts. We also need user comments.
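The extraction step above can be sketched with nothing but the Python standard library. This is a minimal illustration, not a production scraper: it parses an RSS 2.0 string (a real feed would be fetched over HTTP) and keeps only the headline and a Digg-style excerpt of up to 255 characters.

```python
import xml.etree.ElementTree as ET

def extract_items(rss_xml, excerpt_len=255):
    """Pull (title, excerpt) pairs out of an RSS 2.0 feed string.

    Only the headline and a short excerpt are kept, mirroring the
    Digg-style structure described above."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = (item.findtext("title") or "").strip()
        desc = (item.findtext("description") or "").strip()
        items.append((title, desc[:excerpt_len]))
    return items

# Minimal feed for demonstration (a real feed would be fetched over HTTP).
sample = """<rss version="2.0"><channel>
  <item><title>Example headline</title>
  <description>A short news excerpt.</description></item>
</channel></rss>"""

print(extract_items(sample))
# → [('Example headline', 'A short news excerpt.')]
```

In practice you would iterate over a list of feed URLs and deduplicate by headline before storing anything.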
Where do we get user content? In this particular case, the best source is the same resource we borrowed the structure from, i.e. Digg. Take the headline of a typical Digg story and remove the common words from it: why, but, I, a, about, an, are, as, at, be, by, com, de, en, for, from, how, in, is, it, la, of, on, or, that, when, where, who, will, with, the, www, and, if, and other meaningless words.
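The stopword filter just described is a few lines of Python. The word set below is an illustrative sample taken from the list above, not an exhaustive one:

```python
# A minimal sketch of the stopword filter described above; the word list
# is illustrative, not exhaustive.
STOPWORDS = {
    "why", "but", "i", "a", "about", "an", "are", "as", "at", "be", "by",
    "com", "de", "en", "for", "from", "how", "in", "is", "it", "la", "of",
    "on", "or", "that", "when", "where", "who", "will", "with", "the",
    "www", "and", "if",
}

def keywords(headline):
    """Strip common stopwords from a headline, keeping only content words."""
    words = headline.lower().split()
    return [w.strip(".,!?\"'") for w in words
            if w.strip(".,!?\"'") not in STOPWORDS]

print(keywords("Why The New Gadget Is Cool"))
# → ['new', 'gadget', 'cool']
```

The surviving content words are what gets fed into the search box in the next step.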
Now we feed the remaining words into the search box and look at the results. All that is left is to rip out the comments from those results, change the user names, and make the whole thing more or less unique. For some comments you can even use Markov chains. Yes, not every comment will match the topic of the news item, but does that really outweigh the fact that it was written by a real person? And can the site's "dubiousness" alone cost it its positions in the search results? No and no.
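A word-level Markov chain of the kind mentioned above fits in a few lines. This is a bare-bones sketch: each word maps to the list of words that followed it in a training corpus, and a short pseudo-comment is produced by a random walk (the corpus string here is a made-up example).

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words that follow it in the text."""
    chain = defaultdict(list)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=8):
    """Walk the chain from a start word to produce a short pseudo-comment."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Toy training text; a real version would train on scraped comments.
corpus = "this article is cool this article is great and cool stuff"
chain = build_chain(corpus)
print(generate(chain, "this"))
```

First-order chains like this produce locally plausible but globally incoherent text, which is exactly the register of throwaway social-site comments.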
Read the typical comments on Digg: "Oh, cool article", "Cool", "+1", and plenty of similar nonsense. Search engines have grown accustomed to this content and tolerate it. Of course, it contradicts the very essence of the Internet, but that is what social networks have brought, and there is no getting away from it. So it does not matter what your content is; what matters is how you use it. If, for example, you take the comments on a popular YouTube video (comments are that site's main textual content) and glue them into a single text, what is the probability that the search engine will ban the page? Exactly: huge. Yet when the owners of YouTube clones split that same text into separate pieces and present them as comments, the pages live happily ever after. Keep this in mind.
So, we already have two elements of a successful gray site. We can even build a simple voting and news-submission system into it. It does not have to work accurately or objectively; the main thing is that it looks believable at first glance. The site will be 100% auto-filled, but there is no reason why you cannot generate hundreds of thousands of pages of ripped content at once and still remain "white and fluffy".
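Since the voting widget only needs to look believable rather than work, one hedged way to fake it (my own assumption, not something the original article spells out) is to derive a deterministic pseudo-random vote count from the headline itself, so the same story always displays the same number without any real voting backend:

```python
import random

def plausible_votes(title, lo=3, hi=120):
    """Derive a stable pseudo-random vote count from the headline.

    Seeding the RNG with the title makes the count deterministic per
    story, so it looks consistent across page loads without storing
    any real votes."""
    rng = random.Random(title)
    return rng.randint(lo, hi)

count = plausible_votes("Example headline")
print(count)
```

Real submissions and votes, if any arrive, can simply be added on top of this baseline.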
The same approach can be applied to any trusted site. Let's return to YouTube. YouTube is neither the first nor the last video site on the Internet. However, it is very popular and ranked so well by search engines that it can easily outrank your 100% unique, optimized 1000-word article. You may say backlinks play a role here. Perhaps, in YouTube's particular case, that is true; but then why do its clones feel so good, whose pages, while not rising high in the results, are practically never banned for spam comments?
Now to the question of backlinks. In terms of link building, gray sites have an advantage over black projects: they can withstand human review, so their links last longer than links from black sites. As for links to the gray site itself, I would start with trackbacks to the news sites you take the news from. If a link to your clone appears on a reputable resource, great; a small thematic blog is also good. Since all this is legal and our gray site looks quite decent, such a link campaign can be quite successful. If you like, post comments as well. Together, these two basic black-hat tactics can yield good link weight. Remember: the main thing is to make the visitor doubt, even for a minute, that your website is just another black project. If you succeed, consider half the work done.
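For reference, a trackback is just a form-encoded HTTP POST to the source article's trackback URL, per the classic Movable Type TrackBack specification. The sketch below only builds the request body (it does not send anything); the URL and site name are hypothetical placeholders:

```python
from urllib.parse import urlencode

def trackback_payload(title, excerpt, url, blog_name):
    """Build the form-encoded body of a TrackBack ping.

    Per the TrackBack spec, this body would be POSTed to the source
    article's trackback URL with Content-Type
    application/x-www-form-urlencoded."""
    return urlencode({
        "title": title,
        "excerpt": excerpt,
        "url": url,          # the page on our own site linking back
        "blog_name": blog_name,
    })

body = trackback_payload(
    "Example headline",
    "A short news excerpt.",
    "http://example.com/news/1",   # hypothetical clone page
    "Example Site",
)
print(body)
```

The receiving blog replies with a small XML document indicating success or an error; many platforms have since disabled trackbacks precisely because of this kind of abuse.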
Now let's talk about how to make such a site more unique and push it up in the results. First of all, I study my competitors. Since all the content on a gray site is reprinted from other (often authoritative) resources, it is unlikely to outrank the originals. Some search traffic on the article headlines will still come, but nowhere near the volumes we need, and the headlines themselves are the reason. If I change them even slightly, search traffic will grow. For example, take this headline: "Hilton's Chiwawa Caught Snorting Coke In The Background Of Sex Video". At the headline-import stage I can add a script that replaces some words with others: Hilton with Paris Hilton, or Congressman Paul with Congressman Ron Paul (a huge database of celebrity names can be found on the IMDb website). You can also hook a synonym database into the script so it replaces selected verbs or nouns. Here, of course, you will have to rack your brains and do some work, but in the long run such improvements can bring good profit. Who knows, maybe you will get lucky and jump to the top for a slightly distorted query on some popular news story.
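The headline-rewriting step above can be sketched as a pair of substitution tables. Both dictionaries here are tiny illustrative samples; a real version would load a large name database (e.g. scraped from IMDb, as suggested above) and a proper thesaurus, and would match whole words rather than raw substrings:

```python
# Hedged sketch of the headline-rewriting script. Sample tables only:
# name expansions first, then synonym swaps.
REPLACEMENTS = {
    "Hilton": "Paris Hilton",
    "Congressman Paul": "Congressman Ron Paul",
}
SYNONYMS = {
    "Caught": "Spotted",
    "Video": "Clip",
}

def rewrite_title(title):
    """Apply name expansions, then synonym substitutions."""
    for short, full in REPLACEMENTS.items():
        title = title.replace(short, full)
    for word, syn in SYNONYMS.items():
        title = title.replace(word, syn)
    return title

print(rewrite_title("Hilton Caught Snorting Coke In The Background Of Sex Video"))
# → Paris Hilton Spotted Snorting Coke In The Background Of Sex Clip
```

Note the naive `str.replace` would double-expand a title that already reads "Paris Hilton"; a real script should check for the full name before substituting.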
That's all about gray sites. Good luck!
Webgid. Studio - website development in Ukraine