WebProNews

Tag: SEO

  • Google’s Matt Cutts Talks About “Unique” Content

    Google has put out a new Webmaster Help video. This time, Matt Cutts answers a question about an ecommerce site that has about 1,000 product pages. The question, as Google translates it, essentially boils down to: How can I make the pages on my site unique?

    “Okay, let me give you a little bit of tough love…the question should not be, ‘I have n pages. How can I make them unique?’ The question is, ‘How many pages can I make that are high quality that provide value to users?’” Cutts says. “If you can’t manage to have a thousand pages, and have something unique – something different than just an affiliate feed or whatever on each page of that site, then why should your thousand pages, which are, again, maybe just rewarmed content of an affiliate feed, rank compared to someone else’s thousand pages of that same affiliate content?”

    Watching the video, something tells me Matt’s a little tired of answering questions like this.

  • Matt Cutts Talks About Fighting Webspam Around The World

    Google put out a new Webmaster Help video today. This time Matt Cutts talks about the company’s efforts to fight webspam on a global scale, as opposed to just in the U.S. and in English.

    The video was a response to the user-submitted question:

    Europe is small compared with USA, so will Google get a webspam team for smaller markets?

    “It turns out we actually do have a webspam team based in Europe (in Dublin, in fact), and they’re able to handle webspam and tackle spam reports in a wide variety of languages, so on the order of well over a dozen – dozens of languages, because there’s a lot of smart people there,” says Cutts. “So we actually have people on the ground in a lot of different offices around the world, and we also have engineers in Zurich. We have an engineer in Hong Kong, but there’s a lot of people who have native experience…people who think about spam in Russia, but also a lot of people in Dublin, who have done a fantastic job dealing with, you know, if an algorithm misses something, they’re there to find the spam. They know the lay of the land. They know who the big players are, and they’re really quite expert.”

    “But if there’s some kind of really unique link spam going on in Poland, for example, there’s a person there, and those people are on top of that situation,” he adds. “So, I think it’s important that Google not be just a U.S.-centric or an English-centric company. We want to be international. We want to deal with all different languages, and it is the case that we might not have webspam full-time on every single language, but you would be pretty shocked at the number of languages that the webspam team collectively is able to fight spam in.”

    Webspam is always a big issue for Google, but it’s been a particularly big issue in the search industry this year, thanks to Google’s launch of the Penguin update, designed to algorithmically tackle sites violating Google’s quality guidelines.

    In another video from Google this week, Cutts said that about 90% of the messages Google sends out to webmasters are about black hat webspam.

    More recent Webmaster Help videos from Matt Cutts here.

  • Matt Cutts: Only 3% Of Those 700K Messages Were About Unnatural Links

    Google has put out a new Webmaster Help video with Matt Cutts doing what he does best – busting myths about Google. Google sent out a lot of messages to webmasters this year, and a lot of them were about unnatural links. However, not as many of them as you might think were actually about that.

    “Today, I wanted to debunk a particular message that I’ve heard around on some black hat forums, where they say, ‘Google sent out over 700,000 unnatural link warnings,’ you know, earlier this year,” says Cutts. “That’s not the case, and I wanted to give you a lot more context and explanation.”

    “Tiffany, a member of the webspam team, was at a search conference, and she showed a graph and basically explained that we had sent out over 700,000 messages in January and February,” he explains. “A lot of people misinterpreted that and said, ‘Oh, these are all about unnatural links,’ because we were taking action on link networks at the time, so everybody assumed that those messages were all about unnatural links. That’s not true. It turns out, roughly 90% of the messages we send out are about black hat, and we had just started adding that functionality, which was why it had grown a lot, and we were deciding to remark on it. So, out of the 700,000 messages that we sent in January and February of this year (2012), over 600,000 of them were about black hat. That’s like pure spam. You know…anybody can look at a site and tell, this is clear cut egregious…nobody wants to see this stuff.”

    “It turns out, only about 3% of the messages that we were sending out were related to unnatural links,” he says. “So under 25,000 messages of the 700,000 messages we sent out were actually about unnatural links.”

    Cutts says 25,000 is a relatively small set of people. It still seems like a lot to me.

  • There Was A Google Panda Update Data Refresh Nine Days Ago

    Remember about a week and a half ago when there was supposed to be a Panda refresh coming within the following week or so? It turns out that it happened the next day, so if you’re still waiting for it, you probably got by unscathed.

    Search Engine Land is reporting that it has confirmed with Google that the refresh occurred on November 21, and affected 0.8% of English queries “to a degree that a regular user might notice”.

    That would be the second known data refresh of the Panda update in November, with one having taken place around November 5.

    We’re now at the end of November, heading into December, which means there are now two full months for which Google hasn’t released its lists of “search quality highlights” and algorithm changes. Lately, they’ve been doing them two months at a time, so we may be able to expect the latest lists very soon. Then, we can look at the other types of changes Google has been focused on.

    More on Panda here.

    Image: Panda Cam (San Diego Zoo)

  • The Best Drunken SEO Pitch You’ll Hear Today

    McCollum & Griggs LLC, a Kansas City law firm has put out a humorous clip featuring a voicemail they claim comes from West Coast SEO company. The guy, who the firm says is drunk (which does appear to be quite possible, based on the audio) claims to be from Microsoft.

    “We’re pretty sure this guy is not from Microsoft,” Phil Singleton, CEO of Kansas City Web Design, which handles the firm’s site, tells WebProNews.

    “An SEO & Internet marketing company on the West Coast tried to sell a domain name related to personal injury law to Kansas City attorneys, McCollum & Griggs, LLC,” David McCollum from the firm explains in the video description. “After a couple days of civil discussions with the domain seller, the lawyers politely passed on the opportunity after discussing with their Kansas City SEO firm. The next day, the supervisor (or owner) called back and left a drunken rant on the attorneys’ voicemail.”

  • Matt Cutts On How Google Handles Keyword Synonyms

    Google’s handling of synonyms when it comes to searcher behavior and the results it displays has been a topic we’ve visited several times in recent memory. We noticed in one of Google’s lists of “search quality highlights” that a number of changes Google has made have been related to synonyms. Clearly, Google has been working on getting better in this department.

    Interestingly, the topic came up in the latest Google Webmaster Help video. Matt Cutts responds to the question:

    If two terms are used essentially interchangeably, does Google realize that the terms are interchangeable? Should you be trying to use both terms, or just focus on one term to get the best search engine traffic? An example is EMR and EHR.

    “My advice would be, all of the things being equal, as long as you can do it without sounding artificial or stilted or spammy, is to go ahead and use both words,” Cutts says. “We have an entire team at Google called the synonyms team, and their job is to sort of realize that car and automobile are the same thing, but I wouldn’t claim that they’re perfect all the time, and so rather than relying on the search engine to really be able to intuit that you’re not only about electronic health records, but also about electronic medical records, my advice would be to make sure that you mention, in a natural way, that you are good at both of those.”

    He continues, “A good way to do that is to have some paragraphs of text or a background about what you do, and just make sure that when you’re talking about what it is – maybe once it’s a USB drive, and the next time it’s a USB stick, and at the bottom of the page it’s a flash drive or whatever, but just read that text aloud, and maybe even ask a friend to read it and say, ‘Does it sound stilted? Does it sound artificial?’ And if you’re trying to incorporate really a lot of keywords, then you’ll notice that your text does become stilted, artificial, or maybe even spammy.”

    “But, in general, if you are able to use synonyms or the words that users would actually type in a natural way, then you reduce or remove that uncertainty, and Google doesn’t have to somehow guess or estimate that that’s what your page is really about, so that can be kind of helpful, and I would recommend that you try to use the words in a natural way as long as it doesn’t go too far, and people start to notice that it sounds weird,” he concludes.

    It sounds like, depending on what you’re trying to rank for, there may be a fine line for what Google will actually like in terms of using various synonyms on a page. Read our discussion on this topic with former Googler Vanessa Fox.

  • Do Your Blog Comments Have Search Ranking Value?

    When Google unleashed the Panda update, it waged war on “thin” content in its search results. Google wants to provide pages that offer information valuable to searchers, as opposed to content that was hastily thrown together.

    It’s easy to hear “thin” content, and associate that with content in which there is not a lot of actual content. In other words, you might take this to mean that Google does not like short articles, and would favor a longer article in a case where these two pieces of content are competing for rankings.

    Have you seen search ranking success with short content? Let us know in the comments.

    The fact is, Google may very well favor the longer, more in depth piece, but that does not mean Google will not value a short article.

    In a Google forum thread, a webmaster asked the question: “Is short content = thin content?” As Barry Schwartz at Search Engine Roundtable points out, Google Webmaster Trends Analyst John Mueller weighed in on the discussion. Here’s what he said:

    “Rest assured, Googlebot doesn’t just count words on a page or in an article, even short articles can be very useful & compelling to users. For example, we also crawl and index tweets, which are at most 140 characters long. That said, if you have users who love your site and engage with it regularly, allowing them to share comments on your articles is also a great way to bring additional information onto the page. Sometimes a short article can trigger a longer discussion — and sometimes users are looking for discussions like that in search. That said, one recommendation that I’d like to add is to make sure that your content is really unique (not just rewritten, autogenerated, etc) and of high-quality.” Emphasis added.

    Last year, Google shared a set of questions that one could ask himself when assessing the quality of a page or an article. One of these was: “Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?”

    Shallow does not mean short. The beginning part of that, which talks about experts and enthusiasts, is likely to have a stronger bearing on how Google views the content. Who you are matters to Google. That’s why they’re looking to push authorship as a stronger signal in the future. Length of a specific piece of content is not necessarily as much of a factor.

    Still, that doesn’t mean it’s not a factor. If one piece of content is simply more informative, which it may very well be if it is longer, it might still be the better result, regardless of who you are. There’s still something to be said for a well researched, insightful article. Google is not looking to ignore this kind of content, by any means.

    Another of Google’s questions is: “Does the article describe both sides of a story?” Sometimes, it may take more text to answer that with a yes.

    One thing about Mueller’s comments that strikes me as interesting is the part about comments. In an article a while back, we looked at the SEO value of comments. Blogger Michael Gray, who turned off his comments several years ago, told us, “It was one of the best decisions I made, and regret not doing it sooner.”

    “Does Google take a look at factors like time on site and bounce rate?” he said at the time. “IMHO yes, but you should be looking to increase those with good information, and solid actionable content, not comments. The biggest effect comments have is giving Google a date to show in the SERPs. This is a huge factor whose importance can’t be understated. If I’m looking for how to fix the mouse on my computer, or what dress Angelina Jolie wore to an awards show, having the date show up in the SERP has a lot of value for the user. If I’m looking to learn how to structure a website, the date plays almost no role. The author’s expertise and understanding of information architecture trumps the date.”

    It should be noted that Google’s Matt Cutts has reportedly said since then that Google doesn’t use bounce rate.

    Interestingly, according to Shoemoney blogger Jeremy Schoemaker, who we also spoke with for that particular article, a Google engineer said at the time that, if anything, comments were diluting the quality score of a page by possibly diluting overall keyword density. There is also the possibility that the few clearly spammy comments that slip through could send poor quality signals to Google.

    “So he said he did not see a positive to leaving indexable comments on my site,” Schoemaker told us at the time.

    But now, here we have Mueller talking up the value of comments.

    Of course, it’s not as if this is the first time that Google has sent mixed signals to webmasters and content creators. But on the other hand, you can’t really hold every person at Google, speaking candidly, accountable for knowledge about every aspect of how Google works, especially when it comes to the search algorithm – Google’s secret recipe.

    It stands to reason that Google would look at comments in similar fashion to how it views the rest of the content on the page. Some comments are obviously of higher quality than others, even if the spammy ones have been cut out. But if quality is there, Google may just see how such comments could be valuable to users.

    Perhaps webmasters should be more stingy with the comments they allow, but then you’re talking about censorship, which is not necessarily a path you want to travel.

    Do you think comments on your blog have helped or hurt you in search? Do you believe they’ve had any effect at all? Should Google take them into consideration? Tell us what you think.

  • Google Explains Basics Of Paid Links In New Video

    Google put out a new Webmaster Help video today about paid links. There’s no new information, and most of our readers can probably skip this one, but essentially, Cutts is just explaining to the uninitiated the difference between a paid link that passes PageRank and an advertisement link, which does not.

    Clearly, this is still something that comes up with people new to the game. This was, after all, a direct response to a user-submitted question.

    This is still an important part of online marketing that any webmaster needs to know, so it’s probably a good thing to keep it in the conversation. If you fall into the camp that is still learning the basics, and you want to know more about Google’s guidelines, and paid link policies specifically, start here.

  • Trouble With Googlebot Fetching Robots.txt? Use Fetch As Googlebot Tool

    Google has released a new Webmaster Help video in response to a question from a user who has been having trouble getting Google to fetch their robots.txt file. Here’s what the user said:

    “I’m getting errors from Google Webmaster Tools about the Googlebot crawler being unable to fetch my robots.txt 50% of the time (but I can fetch it with 100% success rate from various other hosts). (On a plain old nginx server and an mit.edu host.)”

    Google’s Matt Cutts begins by indicating that he’s not saying this is the case here, but…

    “Some people try to cloak, and they end up making a mistake, and they end up reverse-cloaking. So when a regular browser visits, they serve the content, and when Google comes and visits, they will serve empty or completely zero length content. So every so often, we see that – where in trying to cloak, people actually make a mistake and shoot themselves in the foot, and don’t show any content at all to Google.”

    “But, one thing that you might not know, and most people don’t know (we just confirmed it ourselves), is you can use the free Fetch as Googlebot feature in Google Webmaster Tools on robots.txt,” he adds. “So, if you’re having failures 50% of the time, then give that a try, and see whether you can fetch it. Maybe you’re load balancing between two servers, and one server has some strange configuration, for example.”
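
    For the curious, here’s a minimal sketch of the same sanity check done by hand (not an official Google tool; example.com stands in for your site): fetch robots.txt with a browser-like User-Agent and a Googlebot-like one, then compare the responses. A reverse-cloaking mistake or a misconfigured server behind a load balancer tends to show up as empty or failed responses for one of the two.

        # Minimal sketch: compare robots.txt responses for two User-Agents.
        # example.com is a placeholder; run it several times in a row to
        # catch a flaky server behind a load balancer.
        import urllib.request

        URL = "http://www.example.com/robots.txt"
        USER_AGENTS = {
            "browser": "Mozilla/5.0 (Windows NT 6.1; WOW64)",
            "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
        }

        for name, ua in USER_AGENTS.items():
            req = urllib.request.Request(URL, headers={"User-Agent": ua})
            try:
                body = urllib.request.urlopen(req, timeout=10).read()
                # A reverse-cloaking bug serves zero-length content to Googlebot only.
                print(name, len(body), "bytes")
            except OSError as error:
                print(name, "failed:", error)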

    Something to think about if this is happening to you (and hopefully you’re not really trying to cloak). More on Fetch as Google here.

  • Matt Cutts Says His Blog Has Helped Him Get Into The Webmaster Mindset

    This week, we’ve already seen Google release a Webmaster Help video with Matt Cutts talking about SEO, and the prospect of having it renamed something else to shed negative connotations. Today, Google has put out another video of Cutts talking about SEO.

    This time, Cutts is responding to the following question:

    “Have you learned something about SEO that you wouldn’t know if you haven’t had your blog?”

    He says that something he didn’t expect from having a blog was that it helped him step into the mindset of a webmaster or a site owner “a lot better”.

  • Matt Cutts On Whether Or Not SEO Should Be Called Something Else

    Google put out a new Webmaster Help video today. This time, Matt talks about whether or not “search engine optimization” should be renamed.

    “A lot of the times when you hear SEO, people get this very narrow blinder on, and they start thinking link building, and I think that limits the field and limits your imagination a little bit,” says Cutts. “It’s almost like anything you’re doing is making a great site – making sure it is accessible and crawlable, and then, almost marketing it – letting the world know about it.”

    “So it’s a shame that search engine marketing historically refers to paid things like AdWords because otherwise, I think that would be a great way to view it,” he says. “You could also think about not search engine optimization, but search experience optimization. Would users like to see the snippet on the page? Do they land? Do they convert well? Are they happy? Do they want to bookmark it, tell their friends about it, come back to it? All those kinds of questions.”

    “Unfortunately, SEO does have this kind of connotation for a lot of people, and we’ve seen it in media, like CSI type shows where somebody says they’re an SEO and people have this ‘worthless shady criminals’ kind of view – somebody called SEOs that, and I don’t know how to escape that, because there are a few people who are black hats, who hack sites and give the whole field a bad name, and there are a few people who sell snake oil, who give the field a bad name. And unless people drive those guys out of our midst, we’re gonna have this somewhat bad, shaky reputation for SEO,” he says.

    “At the same time, if you change the name to something else, all the people will just come along, and a few of those will be bad actors as well,” says Cutts. “If you have a few bad apples then that will sort of change the reputation of whatever new name you pick, so in my personal opinion, the best way to tackle it would be, you know, think about it in broad terms, or maybe think about how can we differentiate the great stuff that people do making their site faster, more accessible, helping people with keyword research, all that sort of stuff – marketing in different ways.”

    Do you think SEO should get a new name? What would you call it?

  • Google Panda Update Refresh Coming In A Week Or So [Report]

    Earlier this month, Google announced that it rolled out a data refresh for the Panda update. It sounds like they’re getting ready to launch another one in about a week (or maybe a bit longer).

    Barry Schwartz at Search Engine Roundtable was talking about a Panda refresh possibly having occurred over the weekend. That didn’t happen, but Google did reportedly tell him that they’re planning on launching one in seven to ten days, “if all goes according to plan.”

    It’s not very often that we hear about Panda updates or data refreshes before Google actually launches them (maybe they should start pre-announcing them more often). Webmasters can at least brace themselves and/or get prepared for a new one.

    Of course, at this point, you’ve had plenty of time to prepare. You know what Panda does, and what its purpose is. If you’re still putting out the kind of content Google has repeatedly described as low quality, and you’ve managed to escape thus far, it’s probably only a matter of time before the Panda update gets you. You should be taking quality seriously if you want to continue to perform well in Google search results.

    Frankly, you’ve got enough other battles to deal with in that war, even if you do produce high quality content. If you’re not producing Panda-friendly content to begin with, the odds are not stacked in your favor.

    For more on all things Panda, peruse our coverage here.

  • Matt Cutts On How Quickly You Should Hear Back About Reconsideration Requests

    Google’s Matt Cutts has posted a new Webmaster Help video talking about reconsideration requests – specifically, how long they should take. In the video, he responds to the following question:

    I’ve been waiting for 2 months to hear back regarding a reconsideration request. Is this normal? There is no one I can contact about it.

    He says that’s not normal, and that you could show up in the webmaster forum and ask what’s going on.

    “What I would do is, I would actually do another reconsideration request, and I would mention, ‘Hey, I didn’t hear back. What’s going on here?’” he says.

    “When you do a reconsideration request, you should get a sort of confirmation message pretty quickly that lets you know we got the reconsideration request,” says Cutts. “If you don’t see that, then maybe something went wrong in the submission – the form didn’t go through or something along those lines. Much faster than two months – the backlog can vary, so it can be a week, or it can be several days if we have a lot of people all doing reconsideration requests, maybe after we just started sending out a new type of message, for example.”

    “You should hear back with one of roughly three different replies,” he continues. “The replies are basically: yes, we think you’re in good shape so your reconsideration request has been granted; it might be no, we think you still have some work to do, and so that’s the sort of thing where it’s like, okay, you need to keep improving the site; it can also, in some situations, be you don’t have any sort of manual issue at all, and you should hear back very quickly about that.”

    “Sometimes, you flip the coin and you don’t land on heads – yes, or tails – no,” he says. “You sort of get the very edge of the coin, and in that case, you’ll get something that says we have processed your reconsideration request. Typically what that means is there might have been multiple issues. So maybe one issue is resolved, but there’s still another issue or we moved from something where we thought the entire domain was not as good to maybe we’re more granular. So that just means, okay, there’s still some issues, but more of them have been resolved.”

    Last month, Cutts did another video about reconsideration requests in which he said the company was experimenting with ways to make them better. Here are some additional tips on reconsideration requests from Google.

  • Bing Gets Webmaster Guidelines, Here’s What They Say About Paid Links

    Bing announced that, like Google, it now has its own set of Webmaster Guidelines.

    “As we update these guidelines over time, we’ll post notices here at the blog to let folks know to review the changes,” says Bing’s Duane Forrester. “Changes should be infrequent as these current Webmaster Guidelines cover most major topics. They are not exhaustive and you should not expect to find deep, technical answers in them. They are intended to help most business owners understand the broad strokes of search marketing.”

    The guidelines can be found in Bing’s Webmaster Help center under Content Guidelines. There are sections on: 404-Pages Best Practices, Link Building, Marking Up Your Site with Structured Data, Markup: People and Markup: Products and Offers.

    Pay attention to these, and maybe you can avoid getting hit by some future Bing version of the Penguin update (though if you follow Google’s guidelines, which have been in place for years and have been updated along the way, you’ll most likely be safe in Bing too). Still, something about Bing’s guidelines seems less threatening.

    For example, here’s the section on link buying from Bing’s:

    You may choose to buy a link from a trusted website. This is your choice, but you should know that as Bing begins to see a pattern of links from one website turning off each month, then new ones showing up for a month or so from the same domain to new websites, we begin to understand the website is selling links. Thus, any links leaving that website will be suspect. Search engines are very good at seeing patterns so think carefully before purchasing a link in search of elevated rankings.

    That said, buying a link on a busy website can bring you direct traffic, so it does remain a valid marketing tactic. Just be careful how often you employ this tactic lest Bing form the impression you’re buying links to try to influence your organic rankings.
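
    The pattern Bing describes is easy to picture in code. Here’s a toy sketch (purely illustrative, and in no way Bing’s actual implementation) of flagging a site whose outbound links are swapped out month after month; the snapshots are invented:

        # Toy sketch: flag high month-over-month turnover in a site's
        # outbound links, the "selling links" pattern Bing describes.
        monthly_links = {  # hypothetical crawl snapshots
            "2012-09": {"site-a.com", "site-b.com", "site-c.com"},
            "2012-10": {"site-c.com", "site-d.com", "site-e.com"},
            "2012-11": {"site-f.com", "site-g.com", "site-h.com"},
        }

        months = sorted(monthly_links)
        for prev, cur in zip(months, months[1:]):
            kept = monthly_links[prev] & monthly_links[cur]
            churn = 1 - len(kept) / len(monthly_links[prev])
            print(f"{prev} -> {cur}: {churn:.0%} of links replaced")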

    Bing also has a series of Webmaster webinars coming up. Check out the blog post for the schedule.

  • SEO Shows Google Results Can Be Hijacked

    People have been claiming to see scrapers of their content showing up in Google search results over their own original content for ages. One SEO has pretty much proven that if you don’t take precautions, it might not be so hard for someone to hijack your search result by copying your content.

    Have you ever had your search results hijacked? Scrapers ranking over your own original content? Let us know in the comments.

    Dan Petrovic from Dejan SEO recently ran some interesting experiments, “hijacking” search results in Google with pages he copied from original sources (with the consent of the original sources). Last week, he posted an article about his findings, and shared four case studies, which included examples from MarketBizz, Dumb SEO Questions, ShopSafe and SEOmoz CEO Rand Fishkin’s blog. He shared some more thoughts about the whole thing with WebProNews.

    First, a little more background on his experiments. “Google’s algorithm prevents duplicate content displaying in search results and everything is fine until you find yourself on the wrong end of the duplication scale,” Petrovic wrote in the intro to his article. “From time to time a larger, more authoritative site will overtake smaller websites’ position in the rankings for their own content.”

    “When there are two identical documents on the web, Google will pick the one with higher PageRank and use it in results,” he added. “It will also forward any links from any perceived ‘duplicate’ towards the selected ‘main’ document.”

    In the MarketBizz case, he set up a subdomain on his own site and created a single page by copying the original HTML and images of the content he intended to hijack. The new page was +1’d and linked to from his blog. The page replaced the original one in the search results, thanks to a higher PageRank and a few days for Google to index the new page.

    In the Dumb SEO Questions case, he tested whether authorship helped against a result being hijacked. Again, he copied the content and replicated it on a subdomain, but without copying any media. The next day, the original page was replaced with the new page in Google, with the original being deemed a duplicate. “This suggests that authorship did very little or nothing to stop this from happening,” wrote Petrovic.

    In the ShopSafe case, he created a subdomain and replicated a page, but this time the original page contained rel=”canonical”; the tag was stripped from the copied page. The new page overtook the original in search, but it didn’t replace it when he used the info: command. +1’s had been removed after the hijack to see if the page would be restored, and several days later, the original page overtook the copy, Petrovic explained.

    Finally, in the Rand Fishkin case, he set up a page in similar fashion, but this time “with a few minor edits (rel/prev, authorship, canonical)”. Petrovic managed to hijack a search result for Rand’s name and for one of his articles, but only in Australian searches. This experiment did not completely replace the original URL in Google’s index.

    Rand Fishkin results

    If you haven’t read Petrovic’s article, it would make sense to do so before reading on. The subject came up again this week at Search Engine Land.

    “Google is giving exactly the right amount of weight to PageRank,” Petrovic tells WebProNews. “I feel they have a well-balanced algorithm with plenty of signals to utilise where appropriate. Naturally like with anything Google tries to be sparing of computing time and resources as well as storage so we sometimes see limitations. I assure you, they are not due to lack of ingenuity within Google’s research and engineering team. It’s more to do with resource management and implementation – practical issues.”

    The Dumb SEO Questions example was interesting, particularly in light of recent domain-related algorithm changes Google has made public. In his findings, Petrovic had noted that a search for the exact match brand “Dumb SEO Questions” brought the correct results and not the newly created subdomain. He noted that this “potentially reveals domain/query match layer of Google’s algorithm in action.”

    Petrovic believes there is still significant value to having an exact match domain. “Exact match domains were always a good idea when it comes to brands, it’s still a strong signal when it’s a natural situation, and is now more valuable than ever since Google has swept up much of the EMD spam,” he says.

    Here’s what industry analyst Todd Malicoat had to say on the subject in a recent interview.

    Regarding the Fishkin experiment, Petrovic tells us, “Google’s perception of celebrity status or authority are just a layer in the algorithm cake. This means that if there is a strong enough reason Google will present an alternative version of a page to its users. There goes an idea that Wikipedia is hardcoded and shows for everything.”

    When asked if freshness played a role in his experiments, he says, “Yes. Freshness was a useful element in my experiments, but not the key factor in the ‘overtake’ – it’s still the links or should I say ‘PageRank’. I know this surprised a lot of people who were downplaying PageRank for years and making it lame to talk about it in public.”

    “This article was me saying ‘stop being ignorant,’” he says. “PageRank was and is a signal, why would you as an SEO professional ignore anything Google gives you for free? The funniest thing is that people abandon PageRank as a ridiculous metric and then go use MozRank or ACRank as an alternative, not realising that the two do pretty much the same thing, yet [are] inferior in comparison.”

    “To be fair, both are catching up with real PageRank, especially with Majestic’s ‘Flow Metrics’ and the growing size of SEOMoz’s index,” he adds.

    Petrovic had some advice for defending against potential hijackers: use rel=”canonical” on your pages, use authorship, use full URLs for internal links, and engage in content monitoring with services like CopyScape or Google Alerts, then act quickly and request removals.
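
    The first of those defenses is easy to automate. Here’s a minimal sketch (the URL is hypothetical, and a real checker would walk the whole site) that warns when a page lacks a self-referencing rel=”canonical” – the tag that tells Google which copy of a document is the “main” one:

        # Minimal sketch: warn when a page lacks a self-referencing
        # rel="canonical" link. The URL below is a hypothetical placeholder.
        from html.parser import HTMLParser
        from urllib.request import urlopen

        class CanonicalFinder(HTMLParser):
            def __init__(self):
                super().__init__()
                self.canonical = None

            def handle_starttag(self, tag, attrs):
                attributes = dict(attrs)
                if tag == "link" and attributes.get("rel") == "canonical":
                    self.canonical = attributes.get("href")

        url = "http://www.example.com/some-article"
        page = urlopen(url).read().decode("utf-8", errors="replace")
        finder = CanonicalFinder()
        finder.feed(page)
        if finder.canonical != url:
            print("warning: canonical is", finder.canonical, "expected", url)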

    He also wrote a follow up to the article where he talks more about “the peculiar way” Google Webmaster Tools handles document canonicalization.

    So far, Google hasn’t weighed in on Petrovic’s findings.

    What are your thoughts about Petrovic’s findings? Share them in the comments.

  • Matt Cutts On How Google Interprets Links To URLs Ending With Campaign Tags

    Google is really pumping out the Webmaster Help videos lately. Today’s features Matt Cutts responding to the following user question:

    Will Google interpret links to URLs ending with a campaign tag like ?hl=en (www.example.com?hl=en) as a link to www.example.com or to a completely different page? What about the SEO effect of inbound links?

    “The team that really does the core indexing does a great job of canonicalizing, which is picking from different URLs and combining them together in the right way,” says Cutts. “So, if you’re using sort of standard URL endings – URL parameter tags, tracking tags, stuff like that, often times we’ll be able to detect that those are the same page, and that they should really be combined in some way.”

    “If that’s not the case, we like to talk about the KISS rule (the Keep It Simple Stupid rule). If you don’t trust a search engine to get it right, you do have a lot of different options,” he says. “So, you can always, for example, use rel=”canonical” whenever you land on a particular page. If it’s a tracking URL, and you don’t want it in the index at all, in theory, you could record that that particular landing page was hit on the server, and then do a 301 to whatever the final page is going to be, and we also provide a free tool in the Google Webmaster Tools console at Google.com/webmasters that basically lets you say, ‘These URL parameters matter. These URL parameters don’t matter.’ So, when you see a URL with a particular set of parameters, you can strip these parameters out, and you’ll still get the same content.”

    “So if you do use something non standard, and you see it being an issue, maybe a URL showing up twice in Google’s search results, that is something where I’d recommend checking out our URL parameter tool or consider using rel=”canonical” or a 301 redirect,” Cutts concludes.
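
    To make the canonicalization idea concrete, here’s a minimal sketch of the kind of normalization Cutts describes – collapsing URL variants that differ only by ignorable parameters. The parameter list is a hypothetical example, not anything Google publishes:

        # Minimal sketch: strip ignorable parameters so URL variants collapse
        # to one canonical form. The parameter names are invented examples.
        from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

        IGNORED_PARAMS = {"hl", "utm_source", "utm_medium", "utm_campaign"}

        def canonicalize(url):
            parts = urlparse(url)
            kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS]
            return urlunparse(parts._replace(query=urlencode(kept)))

        print(canonicalize("http://www.example.com/?hl=en"))       # http://www.example.com/
        print(canonicalize("http://www.example.com/?id=7&hl=en"))  # http://www.example.com/?id=7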

    This has little to do with the actual subject at hand, but I found it somewhat amusing how horrible the YouTube transcript was for this video. For example, it says:

    “So the crawl team the team that really does the court indexing they do a great job of canonical ie sandwiches picking from different your elves and combining them together and the right way so you think sort of standard you were all indian side you were a printer tax cutting taxes like that…”

    Just a heads up. You may actually want to watch these things rather than rely on the transcripts.

  • Matt Cutts On How Google Handles Site-Wide Links Both Algorithmically And Manually

    If you’re interested in how Google treats site-wide backlinks, you’ll be interested in a new Webmaster Help video Google posted today. Matt Cutts takes on the following question:

    Are site-wide backlinks considered good or bad by Google? Or do they just count as 1 link from the whole domain?

    “On the algorithmic standpoint, typically I’ve said before, if we have like keywords – the first keyword counts some, the next keyword counts a little bit, but not as much, the third keyword not as much…so even if you do keyword stuffing – even if you throw a ton of keywords – at some point, it becomes asymptotically diminishing returns, and it doesn’t really help you anymore,” says Cutts. “You can imagine the same sort of thing, you know, if we see a link from a domain, we might count it once, but if we see 50 links from a domain, we still might choose to only count it once. So on an algorithmic side, we do a pretty good job of compressing those links together.”

    “But then there’s also on the manual side,” he continues. “So, imagine that you have a Polish website, and then you see a site-wide link in English talking about, ‘Rent cheap apartments,’ you know. To a regular person, that looks pretty bad. So, certainly it does happen that you have site-wide links – maybe you have a blogroll or something like that, but if I were a manual webspam analyst, sort of doing an investigation, and we got a spam report, you’re an English site, and you’ve got a site-wide Polish link or something like that or vice versa, it looks commercial or it looks off-topic, low-quality or spammy, then that can affect the assessment on whether you want to trust the out-going links from that site.”
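
    The algorithmic “compression” Cutts describes boils down to counting linking domains rather than raw links. A minimal sketch, with invented URLs:

        # Minimal sketch: collapse many links from one domain into a single
        # vote, mirroring the idea that 50 site-wide links may count as one.
        from urllib.parse import urlparse

        backlinks = [  # hypothetical inbound links
            "http://blog.example.pl/sidebar",
            "http://blog.example.pl/footer",
            "http://blog.example.pl/post-1",
            "http://news.example.com/story",
        ]

        unique_domains = {urlparse(link).netloc for link in backlinks}
        print(len(backlinks), "links compress to", len(unique_domains), "domain votes")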

    See more recent videos from Cutts here.

  • Matt Cutts Talks Guest Blogging (Again) And Article Spinning

    Last month, Google put out a Webmaster Help video about guest blogging, and its effects on links. Now, Matt Cutts (who discussed the topic in that video) has appeared in another related video. This time, he responds to the following question:

    Currently, guest blogging is the favorite activity of webmasters for link acquisition. Due to its easy nature, lots of spammy activities are going on like article spinning etc. Is Google going to hammer websites for links acquired by guest blogging?

    “It’s funny, because I did a video – another video recently about guest blogging, and it was sort of like saying, ‘Well, can’t it be an okay activity?’ and I was sort of saying, ‘Well, if you get a really high quality blogger it can,’ but this is the flip side,” he says. “And I want to sort of specifically address it as well. If you were doing so many guest blogs that you’re doing article spinning, and likewise, if you’re allowing so many guest bloggers that you allow things like spun blogs, where people aren’t really writing real content of their own, then that is a pretty bad indicator of quality.”

    “If your website links to sites that we consider low quality or spammy, that can affect your site’s reputation, so the short answer is yes,” says Cutts. “Google is willing to take action if we see spammy or low quality blogging, guest blogging, whatever you want to call it. It’s basically just placing low quality articles on the site. And so, I would be cautious about using that as a primary link acquisition strategy, and if you have a website where you’ll just let anybody post, probably the kinds of links that you get embedded in those articles, as a result, might affect your site’s reputation. So, do think about that.”

    See Matt’s previous video on the topic here.

  • Google Talks About Optimizing For Tablets

    If your site isn’t optimized for the different devices people are using, it’s not going to look good for Google when they point users to your site. This is something to keep in mind when you’re considering optimizing for mobile search and searches from tablets.

    Google wants to provide a good experience to users, and users will not benefit from a site that doesn’t cater to the device they are using, even if the content is there. Google recognizes this, and it seems fairly likely that they will take measures to keep your site from showing up if it’s not optimized. Maybe not your site specifically, but this seems like the kind of thing they’d want to improve upon algorithmically.

    Google suggests using responsive design as a way to ensure that your site looks good across devices. They don’t say it will actually help you in search rankings, but Google’s emphasis on user experience, and the fact that they’re even suggesting it, seem to indicate that this is something they’re paying attention to.

    Google has talked about this a number of times in the past. Here are some steps they provided earlier this year, for example. The next month, they shared more advice. Now, they’re talking about making sure you give tablet users the full-sized web, emphasizing that you should not be showing these users a mobile-specific site.

    “When considering your site’s visitors using tablets, it’s important to think about both the devices and what users expect,” say Google’s Pierre Far and Scott Main in a joint blog post. “Compared to smartphones, tablets have larger touch screens and are typically used on Wi-Fi connections. Tablets offer a browsing experience that can be as rich as any desktop or laptop machine, in a more mobile, lightweight, and generally more convenient package. This means that, unless you offer tablet-optimized content, users expect to see your desktop site rather than your site’s smartphone site.”

    “Our recommendation for smartphone-optimized sites is to use responsive web design, which means you have one site to serve all devices,” they write. “If your website uses responsive web design as recommended, be sure to test your website on a variety of tablets to make sure it serves them well too. Remember, just like for smartphones, there are a variety of device sizes and screen resolutions to test.”

    They also note that another approach is to have separate sites for desktops and smartphones, and just to redirect users to the relevant version. Just make sure you’re sending tablet users to the desktop version.
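
    As a rough illustration of that advice, here’s a minimal sketch of the redirect logic, assuming a Flask app and a deliberately crude User-Agent test (real device detection is more involved): phones get redirected to the smartphone site, while tablets fall through to the desktop version.

        # Minimal sketch: redirect phones to a smartphone site but serve
        # tablets the desktop site. The User-Agent test is a crude assumption;
        # production detection needs a proper device database.
        from flask import Flask, redirect, request

        app = Flask(__name__)

        @app.route("/")
        def home():
            ua = request.headers.get("User-Agent", "")
            # iPad UAs contain "Mobile", so exclude them explicitly;
            # Android tablet UAs typically omit "Mobile".
            is_phone = "Mobile" in ua and "iPad" not in ua
            if is_phone:
                return redirect("http://m.example.com/", code=302)
            return "Full desktop site"  # tablets and desktops land here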

    Still, tablets are coming in a variety of sizes these days. Some of them are getting quite small. Responsive design might be the best bet.

    The two note that they “do not have specific recommendations for building search engine friendly tablet-optimized websites.”

  • Google Pushes New Toolbar PageRank Update

    Google updates toolbar PageRank about four times a year. They updated it in August, and now they’ve updated it again. As you would expect, some sites are going up, and some are going down, but in both cases, webmasters are noticing.

    As usual, people are taking to Twitter to voice their “excitement”.

    This toolbar mythbusting article from 2010 by SEOBullshit has gotten a couple of tweets. Search Engine Roundtable points to some forum threads where the update is being discussed.

    Toolbar PageRank doesn’t mean a lot, but people still like to monitor it. Here’s the obligatory Matt Cutts video in which he talks about toolbar PageRank:

  • Matt Cutts Talks Parked Domain Content

    Google has released a new Webmaster Help video. It’s another one of those in which Matt Cutts answers his own question. This time, it’s:

    I have a parked domain and want to launch a new website on it. Are there any pitfalls I should avoid? Should I keep my domain parked or put some sort of stub page there?

    “Google does have a parked domain detector,” says Cutts. “You’ve probably seen this – where you land on the page, and there’s the lady with the backpack smiling at you, and it’s like, ‘Click here to learn about whatever,’ and those pages aren’t as useful. Users don’t like to see them, and they complain when they do see them, so we do have a parked domain detector that we run, and then when we detect that a page is parked, or a domain is parked, then we try not to show those pages in our search results.”

    “The fact is that if you leave your domain parked right up until you launch, it might take a little while for us to recrawl that page and reprocess it, and for the parked domain detector to really believe that it’s no longer parked,” he continues. “So, my advice would be, once you buy a domain, if you do intend to put something there (you know, a month, a few weeks…whatever, beforehand), just write a paragraph or two or three, and say, ‘This will be the future home of xyz. We’re going to be the world’s number one source of red widgets or blue widgets or green widgets,’ or whatever it is that you’re planning to do.”

    “Even if it’s mysterious, just make sure that you write a paragraph of text or two,” he adds. “It’s not just an empty page or like a completely empty web template, because we do try to detect that sort of behavior.”

    He notes that if there is already some kind of content there, Google won’t have to learn that the page is not parked when you’re actually ready to launch.