Search engine optimization (SEO) is a set of methods aimed at improving the ranking of a website in search engine listings, and could be considered a subset of search engine marketing. The term SEO also refers to "search engine optimizers," an industry of consultants who carry out optimization projects on behalf of clients. Some commentators, and even some SEOs, break down the methods used by practitioners into categories such as "white hat SEO" (methods generally approved by search engines, such as building content and improving site quality) and "black hat SEO" (tricks such as cloaking and spamdexing). White hatters say that black hat methods are an attempt to manipulate search rankings unfairly. Black hatters counter that all SEO is an attempt to manipulate rankings, and that the particular methods one uses to rank well are irrelevant.
Search engines display different kinds of listings in the search engine results pages (SERPs), including: pay per click advertisements, paid inclusion listings, and organic search results. SEO is primarily concerned with advancing the goals of a website by improving the number and position of its organic search results for a wide variety of relevant keywords. SEO strategies may increase both the number and quality of visitors. Search engine optimization is sometimes offered as a stand-alone service, or as a part of a larger marketing effort, and can often be very effective when incorporated into the initial development and design of a site.
For competitive, high-volume search terms, the cost of pay per click advertising can be substantial. Ranking well in the organic search results can provide the same targeted traffic at a potentially much lower cost. Site owners may choose to optimize their sites for organic search if the cost of optimization is less than the cost of advertising.
Not all sites have identical goals for search optimization. Some sites seek any and all traffic, and may be optimized to rank highly for common search phrases. A broad search optimization strategy can work for a site that has broad interest, such as a periodical, a directory, or a site that displays advertising with a CPM revenue model. In contrast, many businesses try to optimize their sites for large numbers of highly specific keywords that indicate readiness to buy. Overly broad search optimization can hinder marketing strategy by generating a large volume of low-quality inquiries that cost money to handle, yet result in little business. Focusing on desirable traffic generates better quality sales leads, resulting in more sales. Search engine optimization can be very effective when used as part of a smart niche marketing strategy.
Early search engines
Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all a webmaster needed to do was submit a site to the various engines, which would run spiders, programs that "crawl" the site, and store the collected data. By default, engines scanned an entire page for search terms, so a page with many different words matched more searches, and a page containing a dictionary-style word list would match almost all searches, limited only by unique names. The search engines then sorted the information by topic and served results based on the pages they had crawled. As the number of documents online kept growing, and more webmasters realized the value of organic search listings, some popular search engines began to sort their listings so they could display the most relevant pages first. This was the start of a friction between search engines and webmasters that continues to this day.
At first, search engines were guided by the webmasters themselves. Early versions of search algorithms relied on webmaster-provided information such as category and keyword meta tags, or index files in engines like ALIWEB. Meta tags provided a guide to each page's content. When some webmasters began to abuse meta tags, causing their pages to rank for irrelevant searches, search engines abandoned their consideration of meta tags and instead developed more complex ranking algorithms, taking into account a more diverse set of factors that rewarded only a limited number of words per page (countering dictionary-style pages), including:
- Text within the title tag
- Domain name
- URL directories and file names
- HTML tags: headings, emphasized and strongly emphasized text
- Term frequency, both within the document and across the index as a whole (often misunderstood and mistakenly referred to as "keyword density"; see the sketch after this list)
- Keyword proximity
- Keyword adjacency
- Keyword sequence
- Alt attributes for images
- Text within NOFRAMES tags
- Content development
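As an illustration of the term frequency factor above, here is a minimal Python sketch, using a made-up snippet of page text, of how often a term appears in a document and what share of the words it makes up (the figure often loosely called "keyword density"). It is only an illustration; the engines' actual weighting schemes were far more elaborate and were never published.

    # Minimal illustration of term frequency versus "keyword density".
    # The page text and term are made up for the example.
    from collections import Counter
    import re

    def term_frequency(text, term):
        """Count how often a term appears, and its share of all words."""
        words = re.findall(r"[a-z0-9']+", text.lower())
        counts = Counter(words)
        occurrences = counts[term.lower()]
        density = occurrences / len(words) if words else 0.0
        return occurrences, density

    page_text = "Cheap widgets. Our widgets are the best widgets for widget fans."
    hits, density = term_frequency(page_text, "widgets")
    print(hits, round(density, 2))   # 3 occurrences out of 11 words, about 0.27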
Pringle et al. (1998) also defined a number of attributes within the HTML source of a page which were often manipulated by web content providers attempting to rank well in search engines. But by relying so extensively on factors that were still within the webmasters' exclusive control, search engines continued to suffer from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their SERPs showed the most relevant pages, rather than useless pages stuffed with keywords by unscrupulous webmasters using a bait-and-switch lure to display unrelated webpages. This led to the rise of a new kind of search engine.
Development of more sophisticated ranking algorithms
Google was started by two PhD students at Stanford University, Sergey Brin and Larry Page, and brought a new concept to evaluating web pages. This concept, called PageRank, has been important to the Google algorithm from the start. PageRank relies heavily on incoming links and uses the logic that each link to a page is a vote for that page's value: the more incoming links a page has, the more "worthy" it is. The value of each incoming link itself varies directly with the PageRank of the page it comes from and inversely with the number of outgoing links on that page.
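The idea can be sketched as a small power-iteration computation. The following Python sketch is illustrative only, not Google's implementation; the damping factor of 0.85 and the toy three-page link graph are assumptions made for the example.

    # Illustrative power-iteration sketch of PageRank (not Google's actual code).
    # links maps each page to the pages it links out to; damping factor is assumed.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                if not outlinks:
                    continue
                # Each outgoing link passes on an equal share of the page's rank.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            rank = new_rank
        return rank

    toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    print(pagerank(toy_web))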
With help from PageRank technology, Google proved to be very good at serving relevant search results. Google quickly became the most popular and successful search engine. Because PageRank measured an off-site factor, Google felt it would be more difficult to manipulate than on-page factors.
However, webmasters had already developed link building tools and schemes to influence the Inktomi search engine. These methods proved to be equally applicable to Google's algorithm. Many sites focused on exchanging, buying, and selling links on a massive scale. PageRank's reliance on the link as a vote of confidence in a page's value was undermined as many webmasters sought to garner links purely to influence Google into sending them more traffic, irrespective of whether the link was useful to human site visitors.
Further complicating the situation, engines by default still scanned an entire webpage for search terms, so a webpage containing a dictionary-style word list would still match almost all searches (except special names), now at an even higher priority conferred by its link rank. Dictionary pages and link schemes could severely skew search results.
It was time for Google, and for other search engines, to look at a wider range of off-site factors. There were other reasons to develop more intelligent algorithms: the Internet was reaching a vast population of non-technical users who were often unable to use advanced querying techniques to reach the information they were seeking, and the sheer volume and complexity of the indexed data was vastly different from that of the early days. Search engines had to develop predictive, semantic, linguistic, and heuristic algorithms. Around the same time as the work that led to Google, IBM had begun work on the Clever Project, and Jon Kleinberg was developing the HITS algorithm.
A proxy for the PageRank metric is still displayed in the Google Toolbar, though the displayed value is rounded to an integer and the data is updated infrequently, so it is likely to be outdated. For these reasons, and because PageRank is only one of more than 100 "signals" that Google considers in ranking pages, experienced SEOs recommend ignoring the displayed PageRank.
Today, most search engines keep their methods and ranking algorithms secret, to compete for finding the most valuable search-results and to deter spam pages from clogging those results. A search engine may use hundreds of factors in ranking the listings on its SERPs; the factors themselves and the weight each carries may change continually. Algorithms can differ widely: a webpage that ranks #1 in a particular search engine could rank #200 in another search engine.
Google, Yahoo and Microsoft do not disclose the algorithms they use to rank pages. Some SEOs have carried out controlled experiments to gauge the effects of different approaches to search optimization. Based on these experiments, often shared through online forums and blogs, professional SEOs form a consensus on what methods work best.
SEOs widely agree that the top signals that influence a page's rankings include:
- Keywords in the title tag.
- Keywords in links pointing to the page.
- Keywords appearing in visible text.
- Link popularity (PageRank for Google) of the page.
In addition, there are many other signals that can affect a page's ranking.
The relationship between SEO and the search engines
The first known mentions of search engine optimization do not appear on Usenet until 1997, a few years after the launch of the first Internet search engines. The operators of search engines quickly recognized that some people from the webmaster community were making efforts to rank well in their search engines, and were even manipulating the page rankings in search results. In some early search engines, such as Infoseek, ranking first was as easy as grabbing the source code of the top-ranked page, placing it on your own website, and submitting the URL to instantly index and rank that page.
Due to the high value and targeting of search results, there is potential for an adversarial relationship between search engines and SEOs. In 2005, an annual conference, AIRWeb (Adversarial Information Retrieval on the Web), was created to discuss bridging the gap and minimizing the sometimes damaging effects of aggressive web content providers.
Some more aggressive site owners and SEOs generate automated sites or employ techniques that eventually get domains banned from the search engines. Many search engine optimization companies that sell services employ long-term, low-risk strategies, and most SEO firms that do employ high-risk strategies do so on their own affiliate, lead-generation, or content sites instead of risking client websites.
Some SEO companies employ aggressive techniques that get their client websites banned from the search results. The Wall Street Journal profiled one such company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients. Wired reported that the same company sued a blogger for mentioning that it had been banned. Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.
Google has enforced webpage restrictions for years, for example against hidden text (text whose foreground and background colors are the same). In 2006, Google could automatically punish a non-conforming website by blocking it from search results the next day for 30 to 35 days (or longer), pending a reinclusion request, and, if the site was reinstated, revert the index to old, expired, or deleted webpages from a year earlier, delaying the re-indexing of the current website for a total of two to four months.
Yahoo! and MSN Search do not automatically punish entire websites for small amounts of hidden text.[citation needed] Google's market share of daily searches has fallen rapidly from 75% to 56% over the past few years, as other search engines find many webpages that Google has banned and cannot display due to its severely limited index.[citation needed] In early 2006, MSN Search typically re-indexed small websites every 14 days, and Yahoo! also re-indexed quickly, much faster than Google, but all three of MSN Search, Yahoo!, and Google could require more than a month to index a new page (a new file name) on an old website.
Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences and seminars. In fact, with the advent of paid inclusion, some search engines now have a vested interest in the health of the optimization community. All of the main search engines, including Google, Yahoo!, MSN, and Ask.com, provide information and guidelines to help with site optimization. Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website, and it also provides data on Google traffic to the website. Yahoo! has SiteExplorer, which provides a way to submit URLs for free (as do MSN and Google), determine how many pages are in the Yahoo! index, and drill down to the inlinks of deep pages. Yahoo! also has an Ambassador Program, and Google has a program for qualifying Google Advertising Professionals.
Getting into search engines' listings
New sites do not need to be "submitted" to search engines to be listed. However, Google and Yahoo! offer submission programs, such as Google Sitemaps, through which an XML feed can be created and submitted. Generally, however, a simple link from an established site will get the search engines to visit the new site and begin to spider its contents. It can take a few days or even weeks from the acquisition of a link from such an established site for all the main search engine spiders to commence visiting and indexing the new site.
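For illustration, a Sitemaps-style XML feed can be produced with a few lines of code. This is a minimal sketch using hypothetical URLs and a fixed date; the Google Sitemaps documentation defines the full format and its optional fields.

    # Minimal sketch of a Sitemaps-style XML feed (URLs and date are hypothetical).
    import xml.etree.ElementTree as ET

    NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=NS)
    for page in ["http://www.example.com/", "http://www.example.com/about.html"]:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page
        ET.SubElement(url, "lastmod").text = "2006-06-01"   # assumed date

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)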
Once the search engine has found the new site, it will generally visit and start to index the pages on the site, as long as all the pages are linked to with anchor tag hyperlinks. Pages which are accessible only through Flash or JavaScript links may not be findable by the spiders.
Search engine crawlers may look at a number of different factors when crawling a site, and many pages from a site may not be indexed by the search engines until they gain more PageRank, links, or traffic. The distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled, as may other importance metrics. Cho et al. (1998) described some standards for these decisions as to which pages are visited and sent by a crawler to be included in a search engine's index.
Webmasters can instruct spiders not to crawl or index certain files or directories through the standard robots.txt file in the root directory of the domain. Standard practice requires a search engine to check this file upon visiting the domain, though a search engine crawler will keep a cached copy of this file as it visits the pages of a site, and may not update that copy as quickly as a webmaster does. The web developer can use this feature to prevent pages such as shopping carts or other dynamic, user-specific content from appearing in search engine results, as well as to keep spiders out of endless loops and other spider traps.
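As a sketch of how a well-behaved crawler honors these rules, the following uses Python's standard urllib.robotparser module against an assumed set of robots.txt directives; real crawlers also deal with caching, crawl delays, and per-bot sections.

    # Sketch of how a crawler might honor robots.txt rules (example rules assumed).
    from urllib import robotparser

    rules = """
    User-agent: *
    Disallow: /cart/
    Disallow: /search
    """.splitlines()

    parser = robotparser.RobotFileParser()
    parser.parse(rules)
    print(parser.can_fetch("ExampleBot", "http://www.example.com/cart/view"))   # False
    print(parser.can_fetch("ExampleBot", "http://www.example.com/products/"))   # True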
For those search engines that have their own paid submission program (like Yahoo!), it may save some time to pay a nominal fee for submission. Yahoo!'s paid submission program guarantees inclusion in its search results, but does not guarantee a specific ranking within those results.
White hat methods
White hat methods of SEO involve following the search engines' guidelines as to what is and what isn't acceptable. Their advice generally is to create content for the user, not for the search engines; to make that content easily accessible to their spiders; and to not try to game the system. Often, webmasters make critical mistakes when designing or setting up their websites, inadvertently "poisoning" them so that they will not rank well. White hat SEOs attempt to discover and correct mistakes, such as machine-unreadable menus, broken links, temporary redirects, or a poor navigation structure.
Because search engines are text-centric, many of the same methods that are useful for web accessibility are also advantageous for SEO. A detailed case for this common ground, cited by the W3C in "Developing a Web Accessibility Business Case," is "SEO: A Positive Influence on Web Accessibility." Google has brought the relationship between SEO and accessibility even closer with the release of Google Accessible Web Search, which prioritizes accessible websites.
Methods are available for optimizing graphical content, including giving images ALT attributes and adding text captions. Even Flash animations can be optimized by designing the page to include alternative content for visitors who cannot view Flash.
Some methods considered proper by the search engines:
- Using a unique and relevant title to name each page.
- Editing web pages to replace vague wording with specific terminology that is relevant to the subject of the page, that the site's intended audience expects to see, and that they will use in searches to find the page.
- Increasing the amount of unique content on the site.
- Writing quality content for the website visitors instead of the search engines.
- Using a reasonably sized, accurate description meta tag without excessive use of keywords, exclamation marks, or off-topic terms.
- Ensuring that all pages are accessible via anchor tag hyperlinks, and not only via Java, JavaScript, or Macromedia Flash applications or meta refresh redirection; this can be done through the use of text-based links in site navigation and also via a page listing all the contents of the site (a site map).
- Allowing search engine spiders to crawl pages without having to accept session IDs or cookies.
- Developing "link bait" strategies. High quality websites that offer interesting content or novel features tend to accumulate large numbers of backlinks.
- Participating in a web ring with other quality websites.
- Writing useful, informational articles under a Creative Commons or other open source license, in exchange for attribution to the author by hyperlink.
Black hat methods
"Black hat" SEO are methods to try to improve rankings which are disapproved of by the search engines, typically because they consider such methods deceptive, and unrelated to providing quality content to site visitors. Search engines often penalize sites they discover using black hat methods, by reducing their rankings or eliminating their listings from the SERPs altogether. Such penalties are usually applied automatically by the search engines' algorithms, because the Internet is too large to make manual policing of websites feasible.
Spamdexing is the promotion of irrelevant, chiefly commercial, pages through deceptive techniques and the abuse of the search algorithms. Over time a widespread consensus has developed in the industry as to what are and are not acceptable means of boosting one's search engine placement and resultant traffic.
Spamdexing often gets confused with white hat search engine optimization techniques, which do not involve deceit. Spamming involves getting websites more exposure than they deserve for their keywords, leading to unsatisfactory search results. Optimization involves getting websites the rank they deserve on the most targeted keywords, leading to satisfactory search experiences.
When discovered, search engines may take action against those found to be using unethical SEO methods. In February 2006, Google removed both BMW Germany and Ricoh Germany for use of these practices.
Cloaking is the practice of serving one version of a page to search engine spiders/bots and another version to human visitors.
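As a rough sketch of the mechanism (not any particular site's implementation), a cloaking server simply branches on the visitor's User-Agent header; the bot names and page contents below are assumptions for the example.

    # Simplified sketch of the cloaking mechanism described above: the server
    # inspects the User-Agent header and returns different content to crawlers
    # than to human visitors. Shown only to illustrate why search engines treat
    # such pages as deceptive; bot names and content strings are assumed.
    KNOWN_CRAWLERS = ("Googlebot", "Slurp", "msnbot")

    def choose_page(user_agent):
        if any(bot in user_agent for bot in KNOWN_CRAWLERS):
            return "keyword-stuffed page tuned for the crawler"
        return "unrelated page shown to human visitors"

    print(choose_page("Googlebot/2.1"))
    print(choose_page("Mozilla/5.0"))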
Legal issues
In 2002, SearchKing filed suit in an Oklahoma court against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted an unfair business practice. This may be compared to lawsuits which email spammers have filed against spam-fighters, as in various cases against MAPS and other DNSBLs. In January 2003, the court granted summary judgment in Google's favor.