In the previous tutorial, "How page is ranked by search engine," I discussed how popular pages (as judged by links) rank higher. By this logic, you might expect the Internet's most popular pages to rank for everything.
To a certain extent they do (think Wikipedia!), but the reason they don’t dominate the rankings for every search result page is that search engines put a lot of emphasis on determining relevancy.
Relevancy is a measure of how closely related two items are, such as a search query and a webpage. Luckily for Google and Microsoft, modern-day computers are quite good at calculating this measurement for text.
[Figure: Size of Google Indices]
So what does this emphasis on textual content mean for SEOs? To me, it indicates that my time is better spent optimizing text than images or videos. This strategy will likely have to change in the future as computers get more powerful and energy efficient, but for right now text should be every SEO’s primary focus.
But Why Content?
The search engines must use their analysis of content as their primary indication of relevancy for determining rankings for a given search query.
For SEOs, this means the content on a given page is essential for manipulating (that is, earning) rankings.
In the old days of AltaVista and other early search engines, SEOs would just need to write "Jessica Simpson" hundreds of times on a page to make it rank #1 for that query.
What could be more relevant for the query “Jessica Simpson” than a page that says Jessica Simpson 100 times? (Clever SEOs will realize the answer is a page that says “Jessica Simpson” 101 times.)
This metric, called keyword density, was quickly manipulated, and the search engines of the time diluted the power of this metric on rankings until it became almost useless. Similar dilution has happened to the keywords meta tag, some kinds of internal links, and H1 tags.
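To make the keyword density idea concrete, here is a minimal sketch of how that old metric could be computed. This is a toy illustration, not the formula any particular engine used; the sample page text is made up.

```python
import re

def keyword_density(text: str, phrase: str) -> float:
    """Toy keyword density: the fraction of the page's words taken up
    by non-overlapping occurrences of the phrase."""
    words = re.findall(r"[a-z']+", text.lower())
    phrase_words = phrase.lower().split()
    if not words or not phrase_words:
        return 0.0
    hits = 0
    i = 0
    # Scan the word stream for non-overlapping matches of the phrase.
    while i <= len(words) - len(phrase_words):
        if words[i:i + len(phrase_words)] == phrase_words:
            hits += 1
            i += len(phrase_words)
        else:
            i += 1
    return hits * len(phrase_words) / len(words)

page = "Jessica Simpson news: Jessica Simpson photos and more."
print(keyword_density(page, "Jessica Simpson"))  # 0.5
```

A page that was half keyword, like the one above, scores 0.5, which is exactly why the metric was so easy to game: repeating the phrase mechanically drives the score up without adding any real relevance.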
The funny thing is that modern-day search engines still work essentially the same way they did back in the time of keyword density. The big difference is that they are now much more sophisticated. Instead of simply counting the number of times a word or phrase is on a webpage, they use natural language processing algorithms and other signals on a page to determine relevancy.
For example, it is now fairly trivial for search engines to determine that a piece of content is about Jessica Simpson if it mentions related phrases like "Nick Lachey" (her ex-husband), "Ashlee Simpson" (her sister), and "Chicken of the Sea" (she is infamous for thinking the tuna brand "Chicken of the Sea" was made from chicken). The engines can do this for a multitude of languages and with astonishing accuracy.
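The related-phrase idea can be sketched as a simple co-occurrence check. This is only a toy model: the related-phrase list below is hand-written for illustration, whereas a real engine would mine such associations statistically from its corpus.

```python
# Hypothetical related-phrase table; a real engine derives these
# associations from large-scale text analysis, not a hand-made list.
RELATED = {
    "jessica simpson": ["nick lachey", "ashlee simpson", "chicken of the sea"],
}

def topical_score(text: str, query: str) -> int:
    """Toy relevance signal: count how many phrases known to co-occur
    with the query also appear on the page."""
    page = text.lower()
    return sum(phrase in page for phrase in RELATED.get(query.lower(), []))

page = ("Jessica Simpson and Nick Lachey starred together before she "
        "joked about Chicken of the Sea.")
print(topical_score(page, "Jessica Simpson"))  # 2
```

A page that mentions two of the three related phrases scores higher than one that merely repeats the query, which is the key shift away from raw keyword density: topical context, not repetition, signals relevance.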
In addition to the words on a page, search engines use signals like image meta information (alt attribute), link profile and site architecture, and information hierarchy to determine how relevant a given page that mentions “Jessica” is to a search query for “The Simpsons.”