How Search Engines Work – SEO Leeds

Major search engines such as Google and Yahoo run automated crawler programs, known as ‘bots’ or ‘spiders’, to fetch web pages; they use the hyperlink structure of the web to discover and crawl pages across the World Wide Web.

There are an estimated 20 billion pages on the web, of which around 10 million have been crawled by search engines. Once a site has been crawled, its pages are ‘indexed’ into a huge database which the search engines use to produce documents in response to a search request. The results returned are based on algorithms (computational procedures used to rank and order documents). It is these algorithms that SEO specialists focus on to help improve a website’s search engine ranking position (SERP).
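The crawl-then-index step described above can be sketched with a tiny inverted index. This is an illustrative assumption of how indexing works in general, not any real search engine's implementation; the page contents and function names are made up.

```python
from collections import defaultdict

def build_index(pages):
    """Build an inverted index: term -> set of page URLs containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for term in text.lower().split():
            index[term].add(url)
    return index

def search(index, query):
    """Return pages containing every query term (a basic AND search)."""
    terms = query.lower().split()
    results = [index.get(t, set()) for t in terms]
    return set.intersection(*results) if results else set()

# Two toy "crawled" pages standing in for the engine's database.
pages = {
    "a.html": "help me find what I am looking for",
    "b.html": "help me with something else",
}
index = build_index(pages)
print(search(index, "help find"))  # only a.html contains both terms
```

Real engines then apply their ranking algorithms to order the matching set; this sketch stops at retrieval.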

The search engine will take the terms or phrases found in the search request and match them to documents in the index database containing the same terms or phrases, in the manner specified by the user. Google, for instance, offers 14 different ways of searching; the most popular are listed below (for a full list, visit Google).

Basic search: help me find what I’m looking for – this will return all documents containing the specified terms.

Phrase search: “help me find what I’m looking for” – this will return all documents containing the exact phrase, with the terms in the specified order.

Search within a site: help me find what I’m looking for site:bbc.co.uk – this will return all documents from indexed pages on the BBC website containing the specified terms. N.B. you can also specify a class of sites, e.g. help me find what I’m looking for site:.org will return results from ‘.org’ sites only.

Excluding terms: help me find what I’m looking for -don’t – this search will return all documents as in the basic search, but will exclude the word ‘don’t’, as it has a minus sign immediately before it.
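The four search styles above can be combined into a single query string. The helper below is a sketch; the function name and example terms are assumptions, and only the operator syntax (quotes for phrases, site: for site restriction, a leading minus for exclusion) comes from the text.

```python
from urllib.parse import urlencode

def build_query(terms, phrase=None, site=None, exclude=None):
    """Assemble a search query string from basic terms plus optional operators."""
    parts = list(terms)
    if phrase:
        parts.append(f'"{phrase}"')                    # phrase search: exact order
    if site:
        parts.append(f"site:{site}")                   # restrict to one site or TLD
    if exclude:
        parts.extend(f"-{word}" for word in exclude)   # minus sign excludes a term
    return " ".join(parts)

q = build_query(["help", "me", "find"], site="bbc.co.uk", exclude=["don't"])
print(q)  # help me find site:bbc.co.uk -don't
print("https://www.google.com/search?" + urlencode({"q": q}))
```

The final line simply URL-encodes the query for use in a browser address bar; the operators themselves are interpreted by the search engine, not the URL.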

Bumps and Walls Analogy

For a search engine to access and read your content successfully, you need to avoid any factors that may impede the crawlers’ chances of reaching it. These are commonly known as speed bumps and walls. Search engines rely solely on the hyperlink architecture of the web to find new documents and content, and to pick up any changes since their previous visit (to see when your site was last visited, you can use a tool such as the Google Toolbar and view the cached snapshot of the page).

A bump is defined as complex links and deep site structures with little original content; usually just three to four links into a website is too deep for the search engines to crawl effectively, as it slows down their indexing process. A wall is defined as data that cannot be reached through indexable links.

Barriers to avoid:

Complex URLs, e.g. <a target=”_new” rel=”nofollow” href=””> </a>
Having more than 100 unique links to other pages on any one page (it is less likely that every link will be followed)
Pages that are only accessible from a submit form or button
Pages that are buried more than 3 clicks/links away from the homepage (unless your site is well known, with many links pointing to it, spiders will ignore deep pages)
Separating pages into ‘frames’, which confuses spiders as to which pages to rank in the results
Pages that require a dropdown menu
Pages requiring a “session ID” or cookies (unlike browsers, spiders do not preserve this information)
Documents accessible only through a search box
Documents restricted by robots.txt or a meta tag
Hyperlinks with the “nofollow” attribute set
Redirecting pages which do not show their content before redirecting (this can seriously damage your ranking or even get a site banned)
Pages only accessible via a secure login
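The robots.txt restriction in the list above is the one barrier a site owner sets deliberately. Python's standard-library parser shows the check a well-behaved spider performs before fetching a page; the rules and URLs here are made up for illustration.

```python
from urllib.robotparser import RobotFileParser

# A minimal robots.txt blocking all crawlers from the /private/ section.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("*", "https://example.com/public/page.html"))   # True
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
```

Pages disallowed here are never crawled, so they never enter the index, whatever links point at them.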