RankBrain is Google's new self-learning ranking system, which finds relevant answers to a user's query based on factors such as query history, user behavior, and page context (learn more about LSI indexing technology).
At the end of October, news broke about Google's new self-learning RankBrain system, which, together with other ranking factors, helps determine the most relevant pages for users' search queries.
More specifically, RankBrain processes and refines queries by using pattern recognition to interpret complex and ambiguous key phrases and map them to specific topics.
This allows Google to show better search results, especially when handling hundreds of millions of queries per day. Google staff have noted that RankBrain is one of the most important ranking factors the algorithm takes into account.
RankBrain is one of hundreds of signals that determine which pages appear in search results and how they are ranked. It is an attempt to improve search by building on the Knowledge Graph and object search:
So what is object search? How does it work with RankBrain, and in which direction is Google heading? To understand this, you need to go back a few years.
The launch of the Hummingbird algorithm was a radical change. It was a rebuild of how Google processed organic queries: from searching for strings of characters to finding the objects themselves, each with a defined meaning, properties, and relationships to other objects.
How did Hummingbird come about? The new algorithm was born from the developers' attempts to embed semantic search into the Google search engine. In other words, they wanted to build a self-learning machine that could understand natural human language (natural language processing, or NLP). The assumption was that the search engine would understand what you mean when you type your query.
The purpose of semantic search is to improve the accuracy of results by recognizing the user's intent and the contextual meaning of terms. Semantic search systems take into account various parameters (including context, user goals, word variations, synonyms, and both broad and narrowly focused queries) to give the most accurate answers.
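One of the parameters listed above, synonym handling, can be illustrated with a minimal sketch. This is not Google's implementation; the synonym table and helper functions below are hand-built assumptions purely for illustration, showing how a query can match a document even when the two use different wording:

```python
# Hypothetical, hand-built synonym table for illustration only.
SYNONYMS = {
    "car": {"automobile", "vehicle"},
    "cheap": {"inexpensive", "affordable"},
    "fix": {"repair", "mend"},
}

def expand_query(query: str) -> set:
    """Return the query terms plus any known synonyms for each term."""
    terms = set(query.lower().split())
    for term in list(terms):
        terms |= SYNONYMS.get(term, set())
    return terms

def matches(query: str, document: str) -> bool:
    """A document matches if it shares any expanded term with the query."""
    return bool(expand_query(query) & set(document.lower().split()))
```

With this expansion, the query "cheap car" matches a document containing "affordable automobile deals", even though the two share no literal words.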
Two years have passed, but anyone who uses Google can see that the dream of semantic search has not yet been fully realized, although some steps in this direction are already being taken. For example, databases are used to define objects and group them by meaning. A true semantic engine, however, must understand how context affects words and be able to determine and interpret their meaning.
Google is not yet able to understand natural (human) language, although it can recognize known objects and relationships through their definitions.
Of course, a search engine can learn many concepts and relationships over time, if enough people search for a particular combination of terms. This is where the RankBrain machine comes into play: based on the experience it has accumulated, it tries to make the most relevant guess when assembling search results.
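RankBrain has been publicly described as converting words and queries into mathematical vectors, so that a query it has never seen can be matched to similar, already-known ones. Here is a minimal sketch of that idea, with invented three-dimensional vectors (real systems learn hundreds of dimensions from text corpora):

```python
import math

# Toy 3-dimensional "word vectors" invented for this example.
VECTORS = {
    "laptop":   [0.9, 0.1, 0.0],
    "notebook": [0.8, 0.2, 0.1],
    "banana":   [0.0, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def closest_known_term(unseen_vector):
    """Guess which known term an unseen query is most similar to."""
    return max(VECTORS, key=lambda term: cosine(VECTORS[term], unseen_vector))
```

An unfamiliar query whose vector lands near "notebook" would be treated like the known laptop-related queries, which is the kind of "most relevant guess" described above.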
So, by definition, Google is not a semantic search engine. Then what is it?
Today's search engine is excellent at showing specific information. Need a weather forecast? Traffic information? Restaurant reviews? Google can provide all of this, eliminating the need to visit third-party sites: it simply gives the answer at the top of the results page. This is possible thanks to the Knowledge Graph.
The movement from "strings" to "objects" works very well when you need an answer to a question starting with Who, What, Where, When, Why, or How. Moreover, guided by information from the Knowledge Graph, Google can provide users with information they did not even know to ask for, but which could be useful to them.
However, this leap toward "objects" has drawbacks. Although the search engine is good at providing specific, database-backed information, it is not yet as successful at finding the most relevant answers for complex compound queries. Such queries often consist of terms that are only loosely related, and it is difficult for Google to combine them into a single "object".
As a result, when you type a certain set of complex terms into the search box, you will most likely get only a few relevant results, and their relevance will not be very high. For the most part, the results are a collection of loosely related options rather than direct answers. Why does this happen?
As we have said, it is difficult for Google to find suitable answers for queries containing poorly related terms, because the search engine cannot understand and establish the relationships between them. In this case, RankBrain makes an assumption about how those terms are connected; in effect, it guesses.
Try typing a compound query and using the drop-down list of suggestions. Choose the most suitable one. You will see that the queries Google itself proposes give more accurate results, because the objects in those queries, and the connections between them, are already known to the search engine.
By the way, what is meant by the word "object"?
These are nouns: people, places, things, ideas, and so on. Their meanings are defined in the databases that Google refers to, so the search engine acts as a huge digital encyclopedia. However, if two objects in it are not related to each other, the machine has difficulty understanding the user's query. It simply makes a guess.
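The "digital encyclopedia" idea can be sketched as a tiny lookup table. The entities and relations below are illustrative assumptions, not actual Knowledge Graph data; the point is that a query over two connected objects resolves cleanly, while a query over unconnected ones forces the engine to guess:

```python
# Toy "digital encyclopedia": each entity has a type and named
# relationships to other entities. Invented data, for illustration only.
ENTITIES = {
    "Paris":  {"type": "city",    "relations": {"capital_of": "France"}},
    "France": {"type": "country", "relations": {"capital": "Paris"}},
    "banana": {"type": "fruit",   "relations": {}},
}

def relation_between(a: str, b: str):
    """Return the known relation linking entity a to entity b, if any."""
    for name, target in ENTITIES.get(a, {}).get("relations", {}).items():
        if target == b:
            return name
    return None  # unrelated entities: the engine can only guess
```

Here `relation_between("Paris", "France")` resolves to a known relation, while `relation_between("Paris", "banana")` returns nothing, which is exactly the situation where a guess has to be made.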
Google is trying to translate the words that appear on pages into "objects" that mean something and have properties. This is what the human brain does naturally, but when a computer does it, we call it "artificial intelligence".
This is a difficult task, but the work is already underway. "Google builds its own understanding of what certain objects are and what people should know about them," said Amit Singhal, a Google engineer.