With hundreds of millions of users entering hundreds of billions of queries into major search engines such as Google, Yahoo and Bing, the importance of designing a site to be search-engine friendly cannot be overstated. However, even the most experienced designers and developers can make simple mistakes that cost their site thousands of views and (potentially) untold revenue.
Sometimes, the “old-fashioned” methods are still the most effective. While many SEO experts have debated the degree of effectiveness of old-school metatags and how they affect search engine rankings, the common belief is that their presence is helpful, especially since spiders often crawl through much of the text content of each page they index.
The first place most spiders crawl through is the page’s title tag. When a search engine finds a word in a title tag that matches the search terms, the results page displays that title in bold text. The bold text catches the reader’s eye and creates a nearly instinctive compulsion to click on that link. As the specificity of the search terms increases, so does the importance of the title tag.
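As a simple illustration (the page and product names here are hypothetical), a descriptive title tag might look like this:

```html
<head>
  <!-- A concise, specific title: the page's subject comes first,
       followed by the site name -->
  <title>Blue Widget 3000 - Widgets | Example Store</title>
</head>
```

A searcher looking for “blue widget 3000” would see that exact phrase bolded in the results page.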
The next area of the page that the spiders will visit is the content within the “description” metatag. Unfortunately, many so-called “black hat” SEO specialists take advantage of this situation by spamming the title and description tags with keywords that are either tangentially related to the page’s content, or completely irrelevant but filled with “hot” keywords in order to boost traffic. Description content should be relevant, clear and precise.
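A relevant, precise description, as opposed to a keyword-stuffed one, might look like the following (the content is hypothetical):

```html
<head>
  <!-- Describes what the page actually offers, in plain language,
       rather than listing unrelated "hot" keywords -->
  <meta name="description"
        content="Hand-crafted blue widgets in three sizes, with free shipping on orders over $50.">
</head>
```

Search engines often display this text as the snippet beneath the title in results, so it doubles as ad copy for the page.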
Headlines and Subheaders
In HTML4, the header (<h1>, <h2>, etc.) tags specify the headlines and subheaders within content. Not only do search engine crawlers view the content within these tags as important, the tags also help the viewer find the information they need without scrolling through the entire article. Wikipedia uses this technique to great effect when breaking down its lengthy and specific content.
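A typical article outline using these tags might look like this (the headings are hypothetical): one <h1> for the main headline, with <h2> subheaders that readers and crawlers can scan.

```html
<h1>Caring for Your Widgets</h1>

<h2>Cleaning</h2>
<p>Wipe the widget with a soft, dry cloth...</p>

<h2>Storage</h2>
<p>Keep widgets away from direct sunlight...</p>
```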
While content is an important component of search engine optimization, the other key part lies in the design and architecture of the site itself. Spiders must be able to navigate through a site and index all its relevant pages quickly and efficiently. Also, the URLs that link pages together within the site can be used as search engine material.
Since search engines view pages in much the same way as the least sophisticated browser, an overly complicated navigation scheme (or one that is highly dependent on user input) will hamper the spider’s ability to index pages. Most site architects believe that a site’s navigation structure should resemble that of a cake: two layers is good, three is better, four or more is too much.
Search engines also place a high priority on how fast a page loads. While Flash-heavy pages can produce stunning animations and eye-catching designs, they also slow down a page’s load speed. Search engines, like users, won’t wait forever for a beautiful presentation: they both want content and they want it now (if not sooner).
Which link tells you more about a product?
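Compare, for instance, a raw dynamic URL with a descriptive one (both URLs are hypothetical):

```
http://www.example.com/catalog?cat=7&id=4982
http://www.example.com/products/blue-widget-3000
```

The second tells both the user and the search engine what the page is about before it even loads.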
Most developers understand how to assign a descriptive, human-readable URL to a dynamic page. These links enable the spiders to collect content and index the page much more quickly.
In a future article, we will address how the new tags in HTML5 will help with search engine optimization.