By: Saikat Sarkar
At its simplest level, search engines crawl web pages and index them in a constantly updated database so that results can be retrieved for search queries. To do this, search engines rely on a purpose-built tool called a spider, which follows links from one page to another and from one site to another. A spider is also referred to as a bot; Google's is known as GoogleBot.
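The link-following behavior described above can be sketched as a breadth-first traversal. The snippet below is a minimal illustration, not how GoogleBot actually works: it uses a made-up in-memory link graph (the example.com URLs are invented) instead of fetching real pages over the network.

```python
from collections import deque

# A toy link graph standing in for the web: page URL -> outgoing links.
# These URLs are invented purely for illustration.
LINK_GRAPH = {
    "http://example.com/": ["http://example.com/a", "http://example.com/b"],
    "http://example.com/a": ["http://example.com/b"],
    "http://example.com/b": ["http://example.com/"],
}

def crawl(seed):
    """Follow links breadth-first from a seed URL, visiting each page once."""
    visited = []
    queue = deque([seed])
    seen = {seed}
    while queue:
        url = queue.popleft()
        visited.append(url)
        for link in LINK_GRAPH.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited

print(crawl("http://example.com/"))
# -> ['http://example.com/', 'http://example.com/a', 'http://example.com/b']
```

A real spider would replace the dictionary lookup with an HTTP fetch and an HTML link extractor, but the visit-once, follow-every-link loop is the same idea.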
When you submit your site to Google, you are basically requesting that GoogleBot crawl your pages for indexing. However, this is somewhat less reliable than most human-edited directories, since this type of automated indexing depends on fully automated SERPs.
This is why even automated indexing systems consult human-edited directories when designing their ranking algorithms. For instance, Google is a regular visitor of the DMoz directory.
Different Elements of Search Engines
Spiders / Bots
Spiders or bots read the source code of a page, which includes reading the tags and analyzing the structure (internal and external linking included). However, you must understand that the algorithms behind modern search engines are smart: you cannot fool them by writing code aimed at machines, because they read a page much the way a human user does.
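To make "reading the source code" concrete, here is a minimal sketch using Python's standard-library HTML parser. It pulls out the title tag and the outgoing links from a small invented snippet of HTML; a real bot does far more analysis, so treat this only as an illustration of the tag-reading step.

```python
from html.parser import HTMLParser

class PageReader(HTMLParser):
    """Collect the <title> text and outgoing <a href> links, roughly as a bot would."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# An invented page for demonstration.
html = '<html><head><title>Demo</title></head><body><a href="/about">About</a></body></html>'
reader = PageReader()
reader.feed(html)
print(reader.title, reader.links)
# -> Demo ['/about']
```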
Data Centers
This is the database of the search engines, where copies of web pages are stored and from which they are retrieved in response to particular search queries.
Indexer
The indexer is the ordering system that defines how search engines list information, based on particular on-page and off-page elements such as tags, internal linking, backlinks, and other aspects of page formatting.
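The core data structure behind an indexer is usually an inverted index: a mapping from each term to the pages that contain it, so a query can be answered without rescanning every page. The sketch below builds one over two invented pages; real indexers also store positions, tags, and link data.

```python
# Toy documents keyed by an invented page id.
pages = {
    "page1": "search engines crawl the web",
    "page2": "spiders follow links across the web",
}

# Build an inverted index: term -> set of pages containing that term.
index = {}
for page_id, text in pages.items():
    for term in text.lower().split():
        index.setdefault(term, set()).add(page_id)

print(sorted(index["web"]))   # pages that contain the word "web"
# -> ['page1', 'page2']
```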
Algorithm
This is a complex mathematical calculation that determines the weight of a website or, more precisely, of a page. It depends on a wide variety of factors, and no one outside the search engines knows the exact algorithm. It is also dynamic in nature, as search engines constantly modify it in order to fight off spam.
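Since the real algorithm is secret, the best one can do is illustrate the general shape: a weighted combination of many factors. The weights and factor names below are entirely invented for demonstration and are not Google's.

```python
def score(page):
    """Toy weighting: on-page keyword matches plus backlink count.
    The 1.0 and 2.0 weights are invented, not from any real engine."""
    return 1.0 * page["keyword_matches"] + 2.0 * page["backlinks"]

# Two hypothetical pages competing for the same query.
pages = [
    {"url": "a", "keyword_matches": 3, "backlinks": 1},   # score 5.0
    {"url": "b", "keyword_matches": 1, "backlinks": 5},   # score 11.0
]
ranked = sorted(pages, key=score, reverse=True)
print([p["url"] for p in ranked])
# -> ['b', 'a']
```

Because the weights change constantly, chasing any one factor is fragile, which is exactly why the article says the algorithm is dynamic.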
User Interface
This is the visible element of a search engine. Quite obviously, the user interface must contain a search box where users can enter their queries and press the search button. Once the button is pressed, the engine retrieves results matching the query.
Relevancy of Search Results Explained
Search engines based on a crawler mechanism may sometimes retrieve irrelevant results, though today's search engines are much smarter than their ancestors, so relevancy is far more accurate than in the old days. Human-edited directories return more accurate results, as human judgment cannot really be substituted by automation.
Search engine bots analyze the specific content of a page and match it against the search query to retrieve results. These days, however, search engines put greater emphasis on off-page factors, which are usually difficult for webmasters to influence, when ranking pages. This is also referred to as link popularity, and it leads to more accurate and relevant results.
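The combination described above, on-page term matching plus off-page link popularity, can be sketched in a few lines. Everything here is invented for illustration: the 0.5 weight, the page data, and the scoring function are not any real engine's formula.

```python
def relevance(query, page):
    """Toy score: on-page term overlap plus a link-popularity bonus.
    The 0.5 weight is an invented assumption, not a real ranking factor."""
    terms = set(query.lower().split())
    on_page = sum(1 for word in page["text"].lower().split() if word in terms)
    return on_page + 0.5 * page["inbound_links"]

# Hypothetical pages: one matches the query better, one has more backlinks.
pages = [
    {"url": "x", "text": "how search engines work", "inbound_links": 2},
    {"url": "y", "text": "search engine history", "inbound_links": 10},
]
best = max(pages, key=lambda p: relevance("search engines", p))
print(best["url"])
# -> y  (link popularity outweighs the better on-page match here)
```

Note how page "y" wins despite a weaker text match: that is the off-page emphasis the paragraph describes, and also why link popularity is hard for webmasters to game directly.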
About the Author
Saikat Sarkar, the CEO of 6th Vedas – a leading name in the field of internet-based promotional marketing – offers valuable information on his search engine optimization resource blog.
(ArticlesBase SC #3786368)
Article Source: http://www.articlesbase.com/ - Search Engines – How Do They Work?