Archive for February, 2009

Getting Related Terms from Google

One of the most important activities to undertake when starting to think about SEO or PPC is keyword research. An important step is coming up with a list of related keywords. If you start a site about mortgages, what other related keywords are there? Wouldn’t it be nice to know which keywords Google’s algorithms consider related? I went to a Google AdWords Seminar last week (taught by Brad Geddes from bgTheory) where I learned a cool new technique for this.

Do a search for anything related to your keyword in Google (using the ~ operator) while also using the keyword as a negative term in the search (using the - operator). Then look for any bolded words in the results. These bolded words are what Google considers related. To get even more words, run the search again using all of the words you found as negative keywords.

So for example, if you want to find any words related to “mortgage” use the following search:

~mortgage -mortgage

When you run this you will notice that the words “finance”, “refinance”, “lending” and “bank” are bolded. So Google considers these words related to “mortgage”. Now run the search again using the new words as negative keywords, like this:

~mortgage -mortgage -finance -refinance -lending -bank

This results in some new related words: “financial”, “interest” and “corp”. You can keep adding these to the negative keyword list until Google stops finding words.
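
If you would rather script this loop than run the searches by hand, here is a minimal sketch of the idea in Python. Nothing in it comes from the seminar: fetch_bolded_terms is a hypothetical helper you would have to supply yourself (Google’s result markup changes and automated queries may be blocked), so the sketch only shows how the negative-keyword list grows on each round.

# Sketch of the iterative ~keyword / negative-keyword loop described above.
# fetch_bolded_terms is a hypothetical callback: given a query string, it
# should return the words that appear bolded in the results for that query.

def build_query(seed, negatives):
    # build_query("mortgage", ["mortgage", "finance"]) -> "~mortgage -mortgage -finance"
    return "~" + seed + " " + " ".join("-" + word for word in negatives)

def related_terms(seed, fetch_bolded_terms, max_rounds=10):
    negatives = [seed]
    related = set()
    for _ in range(max_rounds):
        new_words = set(fetch_bolded_terms(build_query(seed, negatives))) - set(negatives)
        if not new_words:
            break  # Google has stopped surfacing new related words
        related |= new_words
        negatives.extend(sorted(new_words))
    return related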

You now have a list of words that you might consider as keywords in a PPC campaign or for SEO.

Comments off

A New Way to Deal with Duplicate Content

One of the biggest worries for a lot of webmasters is duplicate content. On-site duplicate content issues occur when two URLs on a site show the same (or very similar) content. This always raises a lot of questions for webmasters: How will the search engines know which page on my site is the “right” version? Will I get penalized for having two pages with the same content?

Today Google, Yahoo and Microsoft announced a solution to this problem: a new way to use the HTML link element. The link element specifies a relationship between two documents; it is already used to point to things like a page’s stylesheet or RSS feed.

Now there is a new standard “canonical” value for the rel attribute of the link tag. The HTML will look something like this:

<link rel="canonical" href="http://www.example.com/products/cameras" />

Place this tag in the head section of the HTML of any of the pages that have duplicate content and you’re done.

For example, all of these following pages might have the same content:

http://www.example.com/products/cameras

http://www.example.com/products/cameras?sort=price

http://www.example.com/products/cameras/all

If all three of these pages have the same content then the same canonical link tag should be on each of them.
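
As a rough illustration (this is mine, not from any of the announcements), here is a small Python sketch that checks whether a set of duplicate pages all declare the same canonical URL. The page HTML is passed in directly; in practice you would fetch each URL yourself.

# Sketch: verify that duplicate pages all carry the same <link rel="canonical"> href.

from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    # Records the href of the first <link rel="canonical"> tag encountered.
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel", "").lower() == "canonical" and self.canonical is None:
            self.canonical = attrs.get("href")

def canonical_of(html):
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical

# Hypothetical example: both versions of the cameras page point at the same canonical URL.
pages = {
    "http://www.example.com/products/cameras":
        '<head><link rel="canonical" href="http://www.example.com/products/cameras" /></head>',
    "http://www.example.com/products/cameras?sort=price":
        '<head><link rel="canonical" href="http://www.example.com/products/cameras" /></head>',
}
assert len({canonical_of(html) for html in pages.values()}) == 1  # same canonical everywhere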

This should be a good way to ease webmasters’ worries about duplicate content issues. You can get more information from Yahoo’s announcement, Google’s announcement and Microsoft’s announcement.

Comments off