Search engines and directories can be one of the most effective ways to acquire visitors to your website. This document is a proven approach to effectively promoting your site through search engines and directories.
When completed, you should be able to:
- List your site with search engines and directories.
- Make your site visible (indexable) to search engines and directories.
- Prevent problems that could cause your site to be delisted or banned from a search engine or directory.
- Begin receiving visitors from search engine and directory referrals.
Search engine and directory systems are constantly changing. The following was written and reviewed by leading search engine experts with the most current information available.
INTRODUCTION TO SEARCH ENGINES
Search engines, directories and PPC-engines
Search engines are usually categorized into the following groups:
- Search engines
- Directories
- PPC-engines - pay per click search engines
Search engines use computer programs ("spiders") to browse the entire Internet, read pages, and automatically index the content into large searchable databases.
Directories are human powered hierarchical structures of links to websites. The largest directories are Yahoo, LookSmart, and ODP.
PPC-engines have greater similarities to other online advertising products than traditional search engines. The biggest PPC engines are Google, Overture, eSpotting, FindWhat, and Kanoodle. In PPC-engines, users bid for the position they want in the search results. The highest bidder receives the top result, the next highest bidder receives the second result, and so on. You can learn more about PPC-engines at Pay Per Click Search Engines and Search Engine Watch.
The rest of this tutorial will focus on search engines and how to optimize your web sites to become more visible and to attract more relevant, interested visitors. We hope you find this is a valuable tutorial on how to build a search engine-friendly website.
Work out a long term strategy
Generally speaking, search engine optimization is a long-term strategic tool rather than a tool for short-term campaigns.
There are many ways to increase the number of targeted visitors your website receives from searches. You can optimize your website, use inclusion programs, pay for business express reviews in directories, purchase sponsor links or bid for positions in PPC-search engines.
For each of these options there are even more options. What optimization strategy do you implement? What PPC-engines can benefit your business and what tools do you choose to use?
This tutorial will focus on search engine optimization. However, we encourage you to look at other available methods of acquiring qualified targeted visitors from search engines.
Three steps on the way to success
The following will detail three key steps involved in effective search engine optimization:
- Optimizing your website
- Submitting and indexing your pages
- Monitoring and analyzing results
After the tutorial, we have included a FAQ covering common search engine optimization problems.
OPTIMIZING YOUR WEBSITE
Search engine optimization
Every search engine has its own algorithm that determines the ranking of documents within search results. A search engine algorithm is a complex set of computer-based rules for how to interpret and rank web pages when users do a search for a particular word or phrase. For obvious reasons, search engines do not publicly reveal the detailed logic behind such algorithms.
There are lots of similarities in the way search engine algorithms work. To learn more about how search engines work or how to improve your rankings, visit Search Engine Watch to find information on search engines and their rules.
The three most important elements of search engine optimization are:
- Links and navigation
- Text and META-data
- Popularity measurement
Keep your website online
With the billions of web pages search engines have to visit each month they usually only have time to request each web page once. If your web server is not accessible at the time the search engines visit your website, your web pages won't get indexed. Don't panic!
Most search engines crawl the web on a regular basis. If some of your web pages have been dropped because your server was down, they will probably get back into the index the next time the search engine visits your website.
Introduction to search engine spiders
Search engines use small computer programs to browse through and read web pages on the Internet. Such programs are often referred to as "search engine spiders" or "search engine robots". When a search engine spider visits a page it first reads the text on the page and then tries to follow links from that page off to other web pages and websites.
You can think of a search engine spider as a very simple form of automated browser. It traverses the Internet like a human would do with a browser - just many times faster and fully automated.
Unlike ordinary browsers, search engine spiders do not "view" web pages - they simply "read" the content and follow the links. Search engine spiders do not enjoy or appreciate the great design and "cool" images on your web pages.
HTML code is the most fundamental part of every web page on the Internet. In a browser, you don't see the actual HTML code, but only the final result - the web page as it should look.
Search engine spiders, however, can't see the web site's final result, and instead, focus on the raw HTML code.
Most browsers can interpret HTML code that has errors and still show a nice looking page. However, you never know how a search engine spider will interpret the same errors. It may skip the part that uses incorrect syntax, or it may skip the entire page or website. There is no way to predict what the search engine spider will do when it encounters code it doesn't understand.
If search engines cannot decode your website, follow the links, and read the text, your site will not get indexed. It is not enough to validate your web pages by looking at them in your browser. The HTML syntax needs to be correct - you need to validate your HTML.
Some editors have built-in HTML validators. Use them! If you are using Search Engine Submission, you will have access to the built-in HTML syntax checker. The syntax checker is a simple version of a full HTML-validator and will serve as a useful tool to find and correct errors in your HTML code.
If you do not have access to any of these validators, we strongly recommend using one of the many free HTML validators you can find on the Internet.
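As a rough illustration of what validation catches, here is a minimal Python sketch (not part of any of the tools mentioned above) that flags tags that are opened but never closed - the kind of error a browser silently forgives but a spider may not:

```python
from html.parser import HTMLParser

# Tags that are legally left unclosed in HTML ("void" elements).
VOID_TAGS = {"br", "img", "hr", "meta", "link", "input", "area", "base", "col"}

class TagBalanceChecker(HTMLParser):
    """A very rough stand-in for a real validator: it only
    reports tags that are opened but never closed."""
    def __init__(self):
        super().__init__()
        self.stack = []
        self.errors = []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            # Pop back to the matching open tag; anything
            # skipped over on the way was left unclosed.
            while self.stack:
                open_tag = self.stack.pop()
                if open_tag == tag:
                    break
                self.errors.append("unclosed <%s>" % open_tag)

def check_html(source):
    checker = TagBalanceChecker()
    checker.feed(source)
    checker.close()
    return checker.errors + ["unclosed <%s>" % t for t in checker.stack]

# A page with a <b> tag that is never closed:
print(check_html("<html><body><p>Hello <b>world</p></body></html>"))  # → ['unclosed <b>']
```

A real validator checks far more than tag balance, but even this crude check catches the class of error that can make a spider skip part of a page.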
Stay away from "alternative HTML"
Some search engine optimization "experts" will advise you to use non-standard HTML to squeeze in extra keywords in hidden places in your HTML code. Do not do that! Search engines penalize websites that use hidden keywords either by de-ranking the websites or erasing them completely from their index. Always use valid HTML.
Links and navigation
This section examines how to build links that search engine spiders can follow.
A good navigation structure makes it easier for search engine spiders and users to browse your website.
If it is easy for users to navigate your site, they will most likely return. If it is easy for search engines to follow your links it is more likely that they will index many of your web pages.
The more web pages you have indexed, the more likely it is that one of them will be found in a search, thus leading a prospect to your website.
Use hyperlinks that search engines can follow
Search engine spiders can't follow every kind of link that webmasters use. In fact, the only kind of hyperlinks that you can be certain all search engine spiders can and will follow are standard text-based links.
In HTML code, text-based links are written like this:
<a href="http://www.domain.com">Link text</a>
The link will look like this on a web page: Link text
All other forms of links can pose a problem for search engine spiders.
If you are using a WYSIWYG editor or content management system, it will usually output the correct HTML code for you when you insert a text link. You can verify that it is indeed a text link by highlighting the link text in your browser. If you can highlight it letter by letter like most other text on a web page, then you should be safe.
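To get a feel for what a spider actually follows, here is a small Python sketch that, like a spider, collects only the href targets of standard text links and ignores everything else:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from standard <a> tags - the only kind
    of link every search engine spider is guaranteed to follow."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html_source):
    parser = LinkExtractor()
    parser.feed(html_source)
    return parser.links

page = '<a href="http://www.domain.com">Link text</a>' \
       '<area href="/map.html">'  # image-map area: not an <a> tag
print(extract_links(page))  # → ['http://www.domain.com']
```

A real spider does this at scale over fetched pages; the point here is simply that only the standard `<a href>` link is collected.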
Image maps or navigation buttons
Sometimes designers want to use graphics for navigational links. There are two common ways of making graphical links, and they are very different: image maps and navigation buttons.
Image maps can have different areas of one image linking to different web pages. That is a very common way to build top- or side-navigation bars or country maps.
Most search engines do not read image maps, so you should always add additional text-based links on each page for search engine spiders to follow. The text-based links will also be good for visitors who do not use a graphic capable browser, or are surfing with graphics turned off to save bandwidth.
Navigation buttons are single images that each link to a specific web page. Most search engine spiders will follow this type of link. However, if you want to be absolutely sure that all search engines can and will follow all your links, we still recommend adding pure text-based links in addition to any kind of graphical links.
Flash, Macromedia and Java Applets
Flash, Macromedia, and Java Applets have something in common: They all require the user to have a special program installed on their computer to open and play the files. Most new browsers come with the necessary programs installed to play the most common file types. However, search engine spiders usually do not open such files.
We recommend that you do not build your navigation links using any of these techniques. If you do, you should only serve the high-tech version of your website to users that have the right plug-ins installed, and have a low-tech version ready for all other users. The low-tech version should include regular text-based links. This will work well with the search engine spiders as well as the users that do not have these plug-ins installed.
If you are using techniques to present navigational links that require the user to have a certain plug-in installed, you can add standard navigation text-links for the search engine spiders to follow in the footer of each web page.
JavaScript and DHTML
Most search engine spiders do not execute JavaScript, so links that are generated or triggered by scripts are usually invisible to them. Therefore, do not put important things like your main navigation in JavaScript. And if you do, make sure that the same links are also available as standard text-links somewhere on your web pages.
Non-HTML formats (PDF, MS Word etc)
Even though some search engines have started to read and index non-HTML file formats such as Microsoft Word, Microsoft Excel, and Adobe PDF (Portable Document Format) files, none of the search engine spiders will follow links within these formats. If you want search engines to follow links on your website, they should always be placed in your regular web pages - your HTML documents.
Do not place text in such formats if you want people to find it in search engines. At least make the content accessible in HTML format too - as a regular web page. Not all search engines will index these non-HTML formats and many will make more errors indexing them.
Text and META-data
If you want good rankings in search engines, you need good content. Content is King! The more good content you have the higher your chance of being found. It's like a lottery. For each piece of valuable content you have, you have a ticket in the "search lottery". The more tickets you have, the higher your chance of having someone click on your link.
On a web page, some text can be seen as part of the visual presentation and some text is "hidden", including META-tags, title-tags, comment-tags (something programmers use to help them navigate the code) and ALT-tags (text descriptions of pictures).
Search engines have reduced the amount of importance they place on META-tags, with the exception of the title-tag. More weight is placed on the visual text - the text that users will see when they arrive at the web page.
The title-tag is technically not a META-tag, but it is the most important HTML tag on your site. The text in the title-tag appears at the top of the browser window and is used as the link text in most search results. All major spider-driven search engines consider the keywords found in this tag in their relevancy calculations. You should pay special attention to this tag, as it carries a lot more weight than most other elements of a web page. You can read more about how to write good title-tags in the next section.
Introduction to keyword research
Before optimizing your website, you need to know what keywords your target group is using. Your keywords are the words and phrases that people might use to find your products, brands, services, or information, via search engines.
You can probably develop a few ideas very quickly. If you run a travel portal, keywords such as: "travel", "vacation", and "holiday" might be good ideas. In fact, a deep research into the topic of "travel" would probably show more than 100,000 different keywords people use when they search for travel information.
Does that mean you have to target all those keywords? No, absolutely not. You should use such research only as a good starting point. Use it to learn the search behavior of your audience. Find out what they call things, how they identify subjects, how precisely or broadly they generally search.
You may discover that the majority of searches for "travel" come from people using combinations of "travel" and a certain city, country, or region. Or you may verify that people search for "travel" more than "vacation".
All this research should be used in the way you tailor your web pages and the way you write your text, titles and META-tags. If most people in your target group use the term "travel" to describe your product, so should you.
Your most important keywords will often include some of the following:
- Your company name, including synonyms and abbreviations
- All product and brand names that you feature on your website
- Countries, regions, or cities you have an association with - often in combination with one of the words above
- Relevant generic terms for your business, brands, or services (e.g. car, house or boat)
- Combinations of 2 and 3 keywords - many people search for phrases rather than single words
We recommend using our Keyword Generator; this tool will help you find the best keywords to use on your site.
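As an illustration of the phrase-building idea (a hypothetical sketch, not the Keyword Generator itself), combining generic terms with locations quickly produces the kind of two-word phrases people actually search for:

```python
from itertools import product

def combine_keywords(terms, modifiers):
    """Build two-word phrases from generic terms and modifiers
    (e.g. locations) - people search in phrases, not single words."""
    phrases = []
    for term, modifier in product(terms, modifiers):
        phrases.append(f"{term} {modifier}")
        phrases.append(f"{modifier} {term}")
    return phrases

print(combine_keywords(["travel", "vacation"], ["Paris", "Rome"]))
# → ['travel Paris', 'Paris travel', 'travel Rome', 'Rome travel',
#    'vacation Paris', 'Paris vacation', 'vacation Rome', 'Rome vacation']
```

Real keyword research then checks each candidate phrase against actual search volume; the mechanical combination is only the starting point.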
Adding keywords to your website
Now that you know the keywords you want to target, it is time to implement the keywords on your web pages. The big question for many people is: How do I do that?
Basically you implement your keywords by using them on your web pages. It's that simple! The first and most important thing to do is use the words or phrases throughout the text on your web pages.
Some people confuse adding keywords to a web page with the META-keyword tag. As you will learn in the next section, the META-keyword tag does not count much in how well a web page ranks, so adding your keywords here and nowhere else won't help you at all.
Each section of your web page carries a different weight in the overall ranking process. In the next section you will learn the parts of your web pages that require special attention.
Focus on a few keywords for each web page
In most cases a web page will only rank well for a few keywords - the one that search engines determine to be the most important for that page. If you run a travel portal you cannot expect to rank well for the thousands of travel related keywords people are using if you only optimize your front page. Instead, you have to optimize each of the web pages on your website that have content people are looking for - the keywords you have developed through your research.
Important places to use your keywords
Search engines weight keywords according to their placement on a web page. There are some spots that are more important than others - places where you should always remember to use the most important keywords for each web page.
Once again, always remember to only use keywords that are relevant to each of your web pages.
- Page titles
The title of your page is the single most important place to have your keywords.
- Headlines
Headlines carry more weight than the rest of the text on your pages. This is because text found in headings usually identifies a particular theme or section of content. The headlines have to be formatted using the HTML-tags <H1> to <H6> to be identified by search engines.
- Body copy
Most people forget that this is the most obvious place a search engine looks for relevant content. You have to use the keywords in the main viewable text on your web page - this is what both human visitors and search engine spiders actually read. If the keywords are not on the viewable page, then they should probably not be in any other area of the web page.
- Link text
The words that are hyperlinked on your web pages are sometimes weighed more heavily than the rest of the words in the main body text. So if you want to rank well for "pet shop Boston" you should use that phrase as a hyperlink somewhere on your page.
- META-tags
META-tags should contain the keywords that appear on the page. As a general rule, if it is on the page, include it in the META-tags. However, the page will not rank well on their use alone. You can read more about META-tags in the next section.
- ALT-tags
The ALT-tag is also called "alternative text" and is used to describe images. You can see the text when you move your mouse over an image on a web page (that is, if the ALT-tag has been added). Some search engines read and index the text in ALT-tags, but the weighting given varies from engine to engine.
Introduction to page titles
The page title is the single most important piece of information on your page when it comes to achieving good rankings in search engines. The words you use in your title carry much more weight than anything else on your page. So use this limited space wisely.
Page titles are written this way in HTML code:
<title>Page title here</title>
The title-tag is placed inside the head section at the top of the HTML page:
<head>
<title>Page title here</title>
</head>
You should limit your page titles to 50-60 characters as most search engines do not read or list more than this.
How to write good page titles
First, make sure that all your web pages have unique titles. Go through each of your web pages and write a title that makes use of the most important keywords for that page. The page title appears at the very top of the browser window; it may or may not be the same as the visible headline on the page.
Keep in mind that the page title is what the users will see as the hyper linked title in search results, so the phrase should trigger users to click on the link and visit your site!
The goal is to make titles that make people click and make use of your primary keywords for each page. If you want a page to rank well on "dental services Chicago" make sure to use those exact words in the title. Maybe a title like "Dental Services in Chicago - open 24 hours a day" would work well for you.
To find the right balance between the use of your keywords and writing a catchy title that makes users click, we suggest you write a few titles for each page. Don't think too much about each of them - just write them down on a piece of paper. When you compare them side by side, it will often be easier to pick the best title.
Introduction to META-tags
META-tags are pieces of text placed in the header of your web pages. Text in a META-tag is used to describe the page. META-tags are often referred to as META-data, which means "data about data", or in this case, data about your page.
META-tags can help search engines better understand and present links to your website. Thus, it is highly recommended that you fill out the META-tags on ALL of your web pages.
META-tags that are important to use
There are many kinds of META-tags and standards, but the only two you need when optimizing a website are META-description tag and the META-keywords tag.
The two META-tags are written this way in HTML code:
<META NAME="DESCRIPTION" CONTENT="Put your description here ….">
<META NAME="KEYWORDS" CONTENT="Put your keywords here ….">
The META-tags are placed inside the head section at the top of the HTML code, just below the page title:
<head>
<title>Page title here</title>
<META NAME="DESCRIPTION" CONTENT="Put your description here ….">
<META NAME="KEYWORDS" CONTENT="Put your keywords here ….">
</head>
Adding META-tags to your web pages
How to add META-tags to your web pages depends on how you maintain your website. You should refer to the manual of the editor or content management system you are using for further detailed instructions.
In most cases there will be a simple box somewhere with two fields to fill out: The META-description and the META-keywords, or just "description" and "keywords". Your program will usually take care of inserting the necessary HTML code in the right place for you.
If you are coding your web pages by hand or in a simple HTML editor, make sure you do not make any syntax errors. If the META-tags have not been coded correctly search engines will not be able to read and use them.
Writing good META-descriptions
The META-description is used by some search engines to present a summary of the web page along with the title and link to the page.
Not all search engines use the META-description. Some search engines use text from the regular text on your web pages and some use a combination. All you can do is make sure you have valid META-descriptions on ALL your web pages so that the search engines that use them have something to read.
You should limit your META-descriptions to 150-200 characters as most search engines do not read or list more than this.
Writing good META-keywords
META-keywords are very often misused and misunderstood. Some webmasters put hundreds of keywords in this tag in the completely unrealistic hope of receiving good rankings for everything listed. Some webmasters even include keywords that have absolutely nothing to do with the actual web page or website.
That has forced the search engines to lower the META-keywords importance in the determination of what a web page is about. However, some search engines still read and use the META-keywords. Thus, remember to write relevant META-keywords for all your web pages.
You should write a unique set of keywords for each web page. You should NEVER copy the same set of keywords for all your web pages.
Do not add keywords that are not 100% directly related to each web page. To be safe, only use keywords that are found in the visual text on your web pages.
It is often discussed whether or not to use commas between keywords. Most search engines will not care as they remove the commas before reading the keywords, but some search engines might still use the comma to find exact keyword and keyword phrase matches.
In any case, the META-keyword tag does not carry much weight in the overall ranking algorithm for any of the major search engines and you will never get penalized for either using or not using commas.
Some people use commas because it makes it easier to read in the raw code. This can be helpful if you want to edit the META-tags by hand at a later time.
The limit for the META-keyword tag is 1000 characters but you should never add that many keywords. Insert the 3 to 5 most important keywords for each given web page in the META-keywords tag - no more! The more keywords you use, the less weight they each carry.
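The length limits above are easy to check automatically. The following sketch uses the rough cut-offs discussed in this tutorial; the exact limits vary from search engine to search engine:

```python
def audit_page(title, description, keywords):
    """Flag a title, META-description, or META-keywords list that
    exceeds the rough limits discussed above (cut-offs vary by engine)."""
    warnings = []
    if len(title) > 60:
        warnings.append("title longer than 60 characters")
    if len(description) > 200:
        warnings.append("description longer than 200 characters")
    if len(keywords) > 5:
        warnings.append("more than 5 keywords")
    # Keywords should also appear in the visible text (not checked here).
    return warnings

print(audit_page(
    title="Dental Services in Chicago - open 24 hours a day",
    description="Family dental clinic in downtown Chicago.",
    keywords=["dental services chicago", "dentist", "emergency dentist"],
))  # → []
```

Running a check like this over every page is a quick way to find titles written before you settled on your keyword strategy.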
Will META-tags give me top rankings?
No, META-tags are not magic bullets. You will not achieve top rankings on keywords if you only use them in your META-tags.
However, we still recommend you fill out your META-tags as some search engines do read and use the data. In those search engines it will give you a better chance of receiving relevant visitors if you have correct, optimized, and relevant META-tags on all your pages. Remember, only those words that appear in the main body of the page should be in the META-tags.
Popularity measurement
For a search engine to understand which pages on the Internet are the most relevant to search users it is simply not enough to look at the content on each web page. Most search engines today analyze both link structures and click-streams on the Internet to determine the best websites and web pages for any given query.
Convincing people to link to your website is not something you do overnight. It takes time. By implementing a strategy to improve your link-popularity, you can gain a long-term advantage. Similar to building a good reputation or a trusted brand, it takes time. But once you have that "trust", the value is high.
Getting the most relevant websites to link to you will not only be good for your link-popularity, as measured by search engines, it will most likely lead to many good visitors coming directly from the referring websites.
The following list can serve as inspiration for where to look for relevant links:
- All the major and local directories, such as Yahoo, ODP or LookSmart
- All trade, business or industry related directories
- Suppliers, happy customers, relevant sister companies, and partners
- Great combinations (if you sell knives and they sell forks, link to each other)
- Related but not competing sites
- Competitors (some companies link to all major competitors)
Link popularity
The link-popularity element measures the number of links between web pages on the Internet. There are two kinds of links to focus on: inbound links and outbound links - links to your website and links from your website.
You can look at inbound links as a form of voting. If site A has a link to site B, A is voting for B. The more votes B gets from other sites, the more link-popularity B gains.
Each website that votes (i.e. has outbound links) is a delegate as they themselves have a number of voters behind them - the websites that link to their website. The more voters they have behind them, the more weight their vote carries on other sites. So the most popular websites have the most link-popularity to pass on.
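The voting idea can be sketched as a simplified PageRank-style calculation. This is a generic illustration of the principle, not the actual algorithm of any search engine: each page repeatedly splits its score among the pages it links to, so a vote from a popular page carries more weight.

```python
def link_popularity(links, iterations=20, damping=0.85):
    """Simplified PageRank-style scoring. `links` maps each page to
    the list of pages it links to; each iteration, every page passes
    a damped share of its current score to its link targets."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * score[page] / len(targets)
                for target in targets:
                    new[target] += share
        score = new
    return score

# B gets "votes" from both A and C, so it ends up with the highest score.
web = {"A": ["B"], "B": ["C"], "C": ["A", "B"]}
scores = link_popularity(web)
print(max(scores, key=scores.get))  # → B
```

Notice that C's vote for B is worth more than a vote from an obscure page would be, because C itself receives a vote from B - exactly the "delegate" effect described above.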
Some search engines can "understand" the theme of related web pages. For these search engines, inbound links from thematically related websites might get a better scoring.
Outbound links are a bit different. They work similar to references in a scientific report. If you have references to the right authorities in your profession, it shows that you know what you are talking about. You acknowledge the "masters" and thereby put yourself in a category above the ones that don't.
Not all links count the same! Links from recognized authorities in your industry count more than links from a small private website on a free host.
Do not use free submission software or services to submit to hundreds of thousands of free directories and search engines just to gain more inbound links. Links from most of these places won't do you any good and there is even a risk that some of the links you get this way will harm your rankings.
Do not participate in organized exchanging of unrelated links between websites (link farming) to boost your link-popularity factor. Most search engines consider that to be spam. Instead, focus on getting inbound links from relevant major players in your field - they are the only ones that really count.
Internal link popularity
In addition to inbound and outbound links, link structure on your own website has an important role in determining the value of each of your web pages.
The pages throughout your website that you most often link to will gain the most internal link-popularity. If one of your web pages has 500 internal pages pointing to it and another only has 10, the first web page is more likely to be valuable to users as there are more pages "voting" for that page.
Typically a website will have a navigation bar of some kind that points to the 5-8 most important sections of the website. This navigation bar appears on every web page and therefore boosts the internal link-popularity of those sections. Make sure to include links to the web pages you most want to rank well in your cross-site navigation bar, and make sure that the pages the navigation bar links to have good targeted content. The pages you link to in the navigation bar will find it easier to rank well in search engines.
Click popularity
Clicks are another way of measuring popularity. Of the millions of results that search engines serve, only a few of them get clicked. Sometimes people come back to the search results shortly after they click, and click another link or make a new search.
If analyzed, these clicks can tell the search engine what users find most important, and where they find the best answers to their searches. However, click-popularity is not widely used and not a dominating factor.
Also, click-popularity is one of the parameters of search engine rankings that you should not try to manipulate in any way! Don't try to boost your own click-popularity by doing hundreds of searches and then clicking on all your own links. It won't work. Search engines that apply this method are way too smart for that.
There are a number of things that can cause a search engine to be unable to read and index your website or specific web pages correctly. This section will provide a brief introduction to some of the most important things to watch out for.
If you want to learn more about how to solve each of the issues described below, visit Search Engine Watch.
Frames
Frames have always been a problem for search engines - and for some users. Some studies show that users have problems performing simple standard tasks on a framed website that they could solve on a similar website not using frames.
Jakob Nielsen is a well-known and respected usability guru who has some rather extreme views on the use of frames. Read his article "Why Frames Suck (Most of the Time)" to understand the arguments against using frames.
When a website uses frames, users can rarely bookmark specific pages deep inside the website. If they use the browser's bookmark function and click the bookmark later to return, they will end up on the front page - even if they did the bookmarking from another web page deeper in the website.
If users can't bookmark pages within your site, neither can the search engines. If users can only bookmark your front page - as is the case with most websites using frames - that is also the only page most search engines will link to.
Some search engines might try to browse and index individual pages on your website outside the normal frame set. However, that will leave you with a whole new set of problems. If the search engines send visitors directly to content pages that are supposed to be viewed inside a frameset it is usually difficult, if not impossible, for users to navigate further into your website.
Some search engines read the content in the NOFRAMES section of a frameset. The NOFRAMES section is a special part of the HTML code that is presented to users with browsers that cannot read frames (e.g. search engines, very old browsers, and homemade viewers).
The code looks like this:
<noframes>Place content here</noframes>
Browsers and search engines that don't support frames can read the content you place between the two NOFRAMES-tags.
To use the space the best way, you should:
- Fill the section with the same text that you have on the page visitors see. Just use the raw text without images, and never use text that is not visible to the users.
- Remember to also add links in the NOFRAMES section for the search engines to follow. Use the same navigational links as you do on the page you show your visitors, but keep them as normal text-links.
Using the two tips above does not in any way guarantee that your web pages will be indexed and rank well. Many search engines will not read the content in the NOFRAMES-section.
There are more advanced options available if you want a frames-based website indexed and ranked well. The easiest solution, however, is still to get rid of the frames.
If you want to use more advanced methods to make your frames-based website more search engine friendly, consult with an experienced search engine optimization expert or firm.
Two more ways of solving the problems of using frames are:
- Information pages
- Inclusion programs
Information pages are static HTML pages (no frames) with information about key products or services provided on your website - one information page for each key topic. An information page is optimized with simple HTML, but otherwise looks and feels like the rest of the website. Creating such pages can in some cases produce search engine friendly entry points to your website.
If you can make additional entry points, you can use Pay For Inclusion programs to have the pages indexed in major search engines. You can use the inclusion programs even if the website is framed - as long as you can provide a unique URL for each target or entry page.
Some search engines might compare the content you have in the NOFRAMES-section with what is presented to normal users. If the search engines do not find the same text in the NOFRAMES-section as they do on the user pages you might get penalized for spam.
Dynamic websites and web pages
Dynamic websites are often difficult for search engines to read because their URLs contain query parameters rather than simple file names.
A traditional static website consists of a number of individual files, usually ending with .html - e.g. index.html, products.html, etc. Each page is a unique file and usually has unique content.
A dynamic website very often only has one or a few files called "templates". The template dictates how to present the content but holds no content itself. All the content is stored in a database. To show a page on a dynamic website, you need to tell the template what to load from the database. Technically, that is done by adding "parameters" to the file name.
If the template is called "pages.asp" and you want to load the content with ID #54, the URL could look something like this: pages.asp?id=54
That part is not so complicated. However, there is often more than one parameter attached to the URL as the database needs to have more information about sort order, navigation etc. This is when it gets complicated for the search engines.
There is simply no way for a search engine to be sure which parameters identify a new page and which are just a sort order of the content, a navigation setting, or something else that does not justify indexing the URL as a new, unique web page.
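A sketch of the problem, using two hypothetical URLs and Python's standard urllib to compare their parameters:

```python
from urllib.parse import urlparse, parse_qs

# Two hypothetical URLs that serve the same underlying content and
# differ only in a sort-order parameter:
url_a = "http://www.example.com/pages.asp?id=54&sort=price"
url_b = "http://www.example.com/pages.asp?id=54&sort=name"

params_a = parse_qs(urlparse(url_a).query)
params_b = parse_qs(urlparse(url_b).query)

# Both URLs carry the same "id" - the parameter that actually identifies
# the page - but a spider has no reliable way to know that "sort" can be
# ignored, so it risks indexing duplicates or skipping the page entirely.
print(params_a["id"] == params_b["id"])      # True
print(params_a["sort"] == params_b["sort"])  # False
```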
There are many other problems related to having dynamic websites and websites built on content management systems indexed in search engines. It is unfortunately not possible to cover all of it in this tutorial.
The good news is that more and more search engines are now starting to include dynamically created pages in their results.
Flash and Macromedia Director
Search engines do not read Flash and Macromedia Director files. Content and links in either of these formats will not be accessible to search engines.
Search engines basically only read regular text on a web page. They do not read text or follow links inside special formats such as Java Applets or other formats that require the user to have a particular program installed.
You can read more about Flash, Macromedia and Java Applets in the Links and Navigation section.
Image maps or navigation buttons
Not all search engines read image maps, although most search engines do read navigation buttons. If you use image maps, you should repeat the links from the image map as regular text-based links.
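For example, a page using an image map (the file names and coordinates below are hypothetical) could repeat its links as plain text underneath it:

```html
<!-- Graphical navigation via an image map -->
<img src="nav.gif" usemap="#mainnav" alt="Navigation">
<map name="mainnav">
  <area shape="rect" coords="0,0,100,30" href="products.html" alt="Products">
  <area shape="rect" coords="100,0,200,30" href="contact.html" alt="Contact">
</map>
<!-- The same links repeated as regular text links
     for spiders that skip image maps -->
<a href="products.html">Products</a> | <a href="contact.html">Contact</a>
```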
You can read more about image maps and navigation buttons in the Links and Navigation section.
IP delivery, agent delivery, cloaking, and personalization
It has become more and more common for websites to implement some degree of personalization, regional targeting, or other forms of individualization in the way web pages are served to individual users.
In its most simple form it can be a program on the server that checks what browser people are using, and serves up a specially tailored version for that browser. The same kind of program can also be used to check if people have specific plug-ins installed so that they get a version of the website they can read.
A more advanced use would be to check what country the user is coming from to serve a local version. Some portals and search engines have been doing that and many other cross-national websites do it. There are many legitimate reasons to do so including business strategies, marketing, and legal issues with products only allowed in certain countries.
The same techniques can also be used to serve different content to search engine spiders and normal human visitors. These techniques are usually referred to as "cloaking".
In most cases search engines do not like the use of cloaking. Thus, do not use cloaking unless you have a very good reason to do so.
A cookie is a small text file that a web server can save in a user's browser for future retrieval when that same user returns. It can be used to store login information or other preferences to make it easier or better for users to use a particular website.
Cookies are very safe to use in the sense that they cannot be read or shared across different users or websites. If a cookie is set on your browser, the website that wrote it is the only website that can read it. Also, other users of a website cannot get to the information in your cookie - it can't be transferred or shared.
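As a sketch, this is how a server-side script could build the Set-Cookie header for a hypothetical language preference, using Python's standard library:

```python
from http.cookies import SimpleCookie

# Build a cookie storing a hypothetical preference. The server sends the
# resulting header with its response; only this same website can later
# read the value back from the visitor's browser.
cookie = SimpleCookie()
cookie["language"] = "en"

print(cookie.output())  # Set-Cookie: language=en
```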
Thus, even if search engine spiders did accept cookies, there would be no way for them to retrieve the cookie from a user's browser when the user clicks a link in a search result. Consequently, there is really no point in writing cookies for the search engines or sending cookies to the search engines.
You can turn off cookies in your own browser to test if your website can be accessed without them. Refer to your manual or help files of the browser you are using for instructions on how to turn off cookies. Usually these instructions will appear under "advanced options".
back to top
SUBMIT AND INDEX PAGES
Search engines do not always include all web pages from a web site. Usually they will only include a sample of your pages - the ones they determine to be most valuable.
Some of your web pages will be more important to have indexed than others. A product information page is far more important to have indexed than a contact form, as it is more likely someone will search for your products than your contact information.
Search engines do not always find the right pages to index by themselves. Sometimes they need a little help and guidance. In this section you will learn how to submit your web pages to search engines to get them indexed.
back to top
Submitting to search engines
Submitting a website to search engines is not always enough to be included or continue to be included. Most search engines analyze the link structures and click streams on the Internet to determine which web pages are most important for them to keep in their indexes.
If no other site is linking to your website it will be harder - in some cases, even impossible - to get indexed. You should therefore try to get as many of the biggest and most relevant websites to link to you.
How to submit to search engines
Most search engines have a form you must fill out to submit your website. You will usually find the link in the bottom of the front page labeled "Add URL".
Generally, submitting your front page is enough - the search engines will follow links from that page to the rest of your website. It can still be a good idea to submit all of the pages on your website; BuildTraffic.net makes this easy by spidering your domain and retrieving all of your pages for submission.
In particular, if you have important sections of your website that are not directly accessible through the regular navigation, you should submit them as well. If you have a site map (a page with links to all the web pages on your website), you can submit that too, to help the search engine spiders find all your content.
The easiest way to get your website into the search engines is by having it included in the major directories first, such as Yahoo, ODP, and Looksmart. Most search engines will consider a website that is included in the major directories to be of higher value and therefore put a lot more focus on getting such websites included.
Is your site indexed?
In most of the major search engines you can verify if your website or specific web pages have been indexed. This is not the same as your ranking, but serves as the basis for a user to find your site. If your website is not indexed it will never rank and never be found.
For example, in Google just do a search for your domain name. If you appear in the results, then you are listed.
Do not over-submit
You should never submit a web page to a search engine if it is already indexed, unless you have made changes to the content of your website or have noticed that your site is no longer listed.
back to top
Excluding pages from getting indexed
If there are pages or directories on your website that you do not want search engines to index, you can exclude them using one of two different methods:
- Robots.txt
- META-robots code
Robots.txt is a file that you place in the root of your web server. The file uses a simple syntax to exclude specific types of users - in this case search engine spiders - from parts of your website. You can either exclude specific search engine spiders or all spiders. To exclude all search engine spiders from all directories on your web server, you would write the following:
User-agent: *
Disallow: /
Note: This would disallow everything, including your home page!
Learn more about how to write robots.txt files at SearchTools.com.
We recommend that you validate your robots.txt file before uploading it. There is no way to predict how a search engine will interpret a robots.txt file with errors. You can use the free validator at Search Engine World.
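To see how a spider interprets such a file, here is a sketch using Python's standard urllib.robotparser with a hypothetical robots.txt that blocks only the /private/ directory:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that excludes all spiders from /private/
# but allows everything else:
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A compliant spider checks each URL against the rules before fetching it:
print(rp.can_fetch("*", "http://www.example.com/private/page.html"))  # False
print(rp.can_fetch("*", "http://www.example.com/products.html"))      # True
```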
META-robots are small pieces of code you can place in the header of your HTML documents. You can use META-robots tags if you don't have access to your web server's root or if you want to exclude single pages on your website. You can read more about how to use the META-robots code at SearchTools.com.
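As a sketch, a META-robots tag that asks spiders neither to index a page nor to follow its links is placed in the HEAD section of the document like this:

```html
<head>
  <!-- Tells compliant spiders: do not index this page,
       and do not follow its links -->
  <meta name="robots" content="noindex,nofollow">
</head>
```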
back to top
MONITOR AND ANALYZE RESULTS
There are various ways to track and monitor the traffic that comes to your website as a result of your search engine optimization. Some of these methods show not only how much traffic you received, which search engines it came from, and which keywords people used to find you, but also how users behave after they land on your website and how many of them actually end up as customers.
All that information is available to you, and some of it is quite easy to get access to. As you scale up and want more information with better and faster reporting, you will have to spend more resources on computers, software, and - not least - the time to handle it.
In this section we will give you a very brief introduction to each of the main ways you can monitor and analyze the traffic on your website and the results of your search engine optimization.
You will find a good and more in-depth tutorial about traffic analysis at DigitalEnterprise.org.
Collecting data, mining data and reporting results
There are three distinct parts to traffic analysis:
- Collection of data
- Mining/analysis of data
- Reporting results
First, you need to collect the visitor data that you want to use as a basis for your mining and reporting. Each of the three methods described in this tutorial has its own advantages and disadvantages.
Second, you need to filter the data - or data mine it - to extract the information you need and make the calculations you want for your reports.
Finally, you need to produce graphical reports of the data you have collected and filtered. These can be real-time (or close to real-time) reports through a web browser, or printed reports in Word or PDF format.
Sometimes traffic analysis packages come bundled with two or all three elements included. Trackers and network sniffing products usually include all three parts, while server log analysis tools include the last two.
Statistics are not absolute
There are no absolute numbers or standards on the Internet when it comes to statistics and traffic analysis. Each method of collecting data, multiplied by the endless number of ways to process and mine it, makes it virtually impossible to compare analysis results across websites.
You should use traffic analysis to track and monitor trends over time for your own website. Traffic analysis can also be used to monitor real-time statistics to tune live ad campaigns.
Trackers - hit counters
Trackers, or hit counters, are essentially small pieces of code placed in a hidden part of your HTML code. Each time your page is loaded the script calls up another server whose database records that your site had a visitor. The tracking server can also record anonymous information about the visitor that requests the web pages at your server.
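A minimal sketch of such a tracker snippet (the tracker hostname and parameters are hypothetical) could look like this:

```html
<!-- Hypothetical tracker snippet: loading this invisible 1x1 image makes
     the visitor's browser contact the tracking server, which records the
     page view in its database -->
<img src="http://tracker.example.com/hit?site=12345&amp;page=front"
     width="1" height="1" alt="">
```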
From a web interface you will be able to follow the traffic on your website in almost real time. The types of reports and features you get access to vary a lot from one tracker to another. With most of them you can see what search engines people come from, what keywords they used, and in some cases what search engine spiders have been visiting your website.
The advantages of trackers:
- Easy to get started
All you have to do is copy and paste a piece of code into your pages and you are ready.
- No computers or software necessary
You don't have to invest in either hardware or software - it is all handled on the computer that hosts the tracker.
- Counts cached page views
Even if a visitor loads the web page from cache (proxy cache at the ISP or local cache in the browser) rather than from your server, it will still trigger the script and count as a view. All views are therefore counted.
The disadvantages of trackers:
- No raw data
You get the reports you get - that's it. Because you do not have the raw data behind the reports, you cannot perform your own data mining or adjust any numbers. You have to live with the reporting that comes with the package.
- Dependency on others
With a tracker you are dependent on another company's service. If they fail and lose the data, there is nothing you can do about it.
- The best trackers are expensive
If you run a large website with many visitors, the best trackers can become very expensive. In that case you can always consider other options.
You can sign up for a free tracker at ServuStats.com
Server Monitor Log Analysis
The Server Monitor is a small application that sits on the server and counts who is requesting what from the server. The Server Monitor stores the information in a log file - commonly known as a "server log file". The log file is a standard text file with one line of text for each action.
This log file can be analyzed using a number of dedicated applications. One of the most well known producers of such applications is Web Trends.
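As a sketch, here is one log line in the widely used "combined" format (the visitor data is made up) and a minimal Python snippet that pulls a few fields out of it:

```python
import re

# One hypothetical line from a server log in the "combined" format:
# ip, identity, user, timestamp, request, status, bytes, referrer, agent
line = ('203.0.113.7 - - [10/Oct/2002:13:55:36 +0000] '
        '"GET /products.html HTTP/1.1" 200 2326 '
        '"http://www.google.com/search?q=widgets" "Mozilla/5.0"')

# A minimal pattern extracting the requested path, status code, and the
# referrer (which reveals the search engine and keywords used):
pattern = r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d+) \S+ "([^"]*)"'
ip, timestamp, method, path, status, referrer = re.match(pattern, line).groups()

print(path, status)  # /products.html 200
```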
If you are running your own servers or have a professional hosting solution, you should be able to get access to the raw server monitor log files. In that case you can run your own reports using a log file analyzer. However, for large websites, log files can become very large, taking up megabytes of data every day.
Some service providers run server monitor log analysis off a central server and will provide you with access to online reports. They generally include good standard reports, but you will not be able to perform your own filtering.
- Standard format
Server monitor logs are in a standard format - at least, most servers write in a format that all analysis and reporting programs can read and understand. This, and the fact that there are many reporting tools available, makes it a very flexible solution. Using the same log files, you can easily switch from one reporting package to another.
- Cheap to get started (for a small website)
For a small website with a reasonable amount of daily log data this can be a cheap way to get started with traffic analysis.
- Impressive reporting
Even the small and inexpensive reporting tools have an impressive selection of printable reports with graphs, tables, and explanations.
- Huge log files on large websites
If you run a website with millions of page views a month you will have extremely large log files - several megabytes of data a day. All that data has to be transferred, converted, filtered, data mined, and processed for reporting, which requires lots of bandwidth and computing power.
- One more program is one more problem
The server monitor application is one more program that has to run on the server, and the more programs you run, the higher the risk of a crash. If you use trackers or network packet sniffing, you can turn off the server monitor application, save up to 20% of the resources on the server, and eliminate the risk of that application crashing.
- Not real time reporting
Due to the huge amount of flat text file data that needs to be processed, real-time - or even close to real-time - reporting is often not an option. Usually you will have to accept reports that are aggregated no more than once every 24 hours.
- Not all visitor actions are available to the Server Monitor
Some actions, like the "stop request" (when a visitor hits the browser's STOP button before the web page has loaded), are not recorded in the server monitor log.
Network packet sniffing
With this technique, the raw data packets on your network are monitored as they pass by. There is no need to insert scripts in your web pages or have a server monitor application produce log files. It is all handled at the network level.
This gives you several advantages over the other monitoring methods. First, you get access to data that neither a server monitor nor a tracker can pick up, such as "stop requests" (when a user hits the STOP button before the page is loaded). You will also be able to access extremely large amounts of visitor data much faster. In fact, with some network sniffing products you can get statistics in real time - for any size of website.
Network sniffing in itself is just a way of collecting visitor data - like a server monitor producing a log. However, network sniffing products often come bundled with some kind of reporting software as well as modules to output the data as traditional server log files or export it directly to a database for further data mining and reporting.
- Easy to set-up and run
No scripts to implement or server logs to maintain.
- Free up resources on your web server
When you use network packet sniffing you can turn off your server monitor application. This will free up 15-20% of the resources on your servers - that is roughly how much power it usually takes to write the logs.
- Access to all data
Network packet sniffing is the only method that provides access to all the visitor data.
- True real time data access
Network sniffing is the only way to get real-time data access and reporting features. When a user sends a request to your server, you actually see the data packet before it hits your server, so you can react to it before your server answers the user.
- Can be a problem to use on secure servers
If most of your website is on secure web servers, this may not be the right solution for you. Talk with the supplier of the network sniffing solutions you are interested in to find out more about how they handle secure servers.
- Can be expensive
For a small site network sniffing can be expensive compared to other methods. For large sites it can be cheaper.
back to top
Don't get obsessed with rankings! Yes, it is important to have good rankings for your primary keywords but there are better ways to monitor the results of your search engine optimization campaign than ranking reports.
One of the best ways is through traffic analysis as described in the last section. With the right reporting tools, you will be able to see what search engines visitors came from and what keywords they used to find you. That is a lot more useful to determine the true value of your work.
If and when you do check your rankings, you should not check thousands of keywords for your website 10 times a day. This places an unreasonable load on the search engines' servers, which the search engine companies do not like.
Within the services you get using BuildTraffic.net, you can see the average ranking on the keywords that brought you visitors.
Rankings change all the time
Rankings change all the time. Sometimes you don't get the same results from a search engine if you test it at two different locations at the very same time.
If you want to make ranking reports using automated tools or services, be aware that they only reflect the situation at the second the report was made, from the location it was made. From another location, at another time, the results might look very different.
back to top