Google Traffic Tips, Tactics and Strategies

Chapter 1: What Is Google And How Does It Search?

Google is a multinational, publicly traded company built around the company's hugely popular internet search engine.  Google's roots go back to 1995, when two graduate students, Larry Page and Sergey Brin, met at Stanford University and collaborated on a research project that would, in time, become the Google search engine.


BackRub (as it was known then, owing to its analysis of backlinks) stimulated interest in the university research work, but didn't win any bids from the major portal vendors.  Undaunted, the founders gathered up sufficient funding to launch on their own and, in September of 1998, began operations from a garage office in Menlo Park, California. That same year, PC Magazine placed Google in its Top 100 Web Sites and Search Engines for 1998.


The name Google was chosen for its similarity to the word googol, the number written as a 1 followed by one hundred zeros, a reference to the vast quantity of information on the web. Google's self-stated mission: "to organize the world's information and make it universally accessible and useful."  In its first few years of operation, Google's search engine competition included AltaVista, Excite, Lycos and Yahoo.


Within a few years, though, Google became so much more popular that its name turned into a verb for conducting a web search; people are as likely to say they "Googled" some information as to say they searched for it.  When you sit down at your computer and perform a Google search, you are very quickly given a list of results from all around the web. So how does Google locate the webpages that match your search query, and decide the order in which the results are shown?  The three main stages in providing search results are crawling, indexing and serving.


Crawling is the process by which Googlebot discovers new and updated webpages to be added to the Google index.  Google uses a huge set of computers to fetch (or "crawl") billions of pages on the web. The program that performs this fetching is called Googlebot (also known as a bot, spider or robot). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.
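To make the crawl process concrete, here is a minimal sketch in Python of how a crawler of this kind might work. It is an illustration only, not Google's actual implementation; the function name, the page limit and the politeness delay are all assumptions, and it relies on the third-party requests and beautifulsoup4 packages.

import time
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed_urls, max_pages=100, delay=1.0):
    # Breadth-first crawl sketch (illustrative, not Google's code):
    # fetch each page, remember its HTML, and queue newly found links.
    frontier = deque(seed_urls)   # URLs waiting to be fetched
    seen = set(seed_urls)         # avoid fetching the same URL twice
    pages = {}                    # url -> raw HTML

    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue              # dead link; a real crawler records these
        pages[url] = html

        # Discover links on this page and add them to the crawl frontier.
        for tag in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, tag["href"])
            if link not in seen:
                seen.add(link)
                frontier.append(link)

        time.sleep(delay)         # be polite to the servers being visited
    return pages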


Google's crawl process begins with a list of webpage URLs, generated from previous crawls and supplemented with Sitemap data supplied by webmasters. As Googlebot visits each of these sites, it detects the links on every page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.  Googlebot processes each of the pages it crawls in order to compile an enormous index of every word it sees and its position on every page.
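As a rough picture of what "an index of every word and its position on every page" means, the sketch below builds a positional inverted index in Python. The tokenizer and the data layout are simplifying assumptions, and it expects plain text, so in practice the HTML returned by a crawler would be stripped of markup first.

import re
from collections import defaultdict

def build_index(pages):
    # pages: dict mapping URL -> plain text of that page.
    # Returns: word -> {url: [positions where the word occurs]}.
    index = defaultdict(lambda: defaultdict(list))
    for url, text in pages.items():
        words = re.findall(r"[a-z0-9]+", text.lower())
        for position, word in enumerate(words):
            index[word][url].append(position)
    return index

Storing positions rather than bare occurrence counts is what lets a search engine match exact phrases and favor pages where the query words appear close together.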


Additionally, it processes information contained in key content tags and attributes, such as title tags and ALT attributes.  Whenever a user enters a search query, Google's computers search the index for matching webpages and return the results they believe are most relevant to the user. Relevance is determined by more than 200 factors, one of which is the PageRank of a given page, which we will discuss in the next chapter.
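Serving can then be pictured as looking up the query terms in that index and ordering the matching pages by a score. The toy scorer below, which simply weighs term counts by a precomputed per-page rank, is an assumption standing in for Google's 200-plus relevance factors; page_rank is a hypothetical dict of such precomputed scores.

def search(index, page_rank, query):
    # Toy serving step: find pages containing every query term, then
    # rank them. The scoring is a stand-in for Google's real factors.
    terms = query.lower().split()
    if not terms:
        return []
    matches = set(index.get(terms[0], {}))
    for term in terms[1:]:
        matches &= set(index.get(term, {}))    # pages with all terms

    def score(url):
        hits = sum(len(index[term][url]) for term in terms)
        return hits * page_rank.get(url, 1.0)  # weight by page importance

    return sorted(matches, key=score, reverse=True)

For example, search(build_index(pages), {}, "google search") would return the URLs containing both words, ranked purely by term frequency when no per-page rank scores are supplied.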
