WebProNews

Tag: SEO

  • Google Reportedly Expanding ‘Not Provided’ To 3rd-Party Paid Search

    Update: OK, this just happened.

    Last month at SMX West (the Search Marketing Expo), Google’s Amit Singhal said there would soon be an announcement related to changes with the controversial “not provided” issue.

    Google implemented secure search a few years ago, and by doing so, stopped providing publishers with keywords searchers use to find pages on their sites. It has, however, continued to show such data to advertisers, which is one of the controversial parts. The apparent double standard has often been brought up by members of the SEO industry, but historically Google has pretty much brushed it off.

    Singhal didn’t specify what Google would be announcing, but his words seemed to suggest that getting rid of the data for advertisers may have been the news.

    Now, reports are coming out that Google is taking the paid search data away from third parties. A.J. Ghergich (via Search Engine Journal) says Google will cease supplying third parties with paid search query data, but that reports within AdWords will remain unaffected.

    “This will also have an affect on website analytics packages but we’ve not yet heard about anything with Google Analytics,” he writes. “Services that use this query data may have no way to access it anymore.”

    He says that his sources received a notice about the change directly from Google, and that he has read the document himself. The change, he says, is expected in the next few weeks.

    If this is really all Singhal was talking about, it’s not going to do much to curb criticism over the double standard accusations. Google has maintained that the switch to not provided is about user privacy, but it has continued to give the data to those willing to pay.

    Image via YouTube

  • Google Wants To Get Better At Indexing Your Business Info

    Google has released some recommendations for webmasters to help them get the search engine to identify and surface business info like phone numbers, business locations, and opening hours. It also launched schema.org support for specifying preferred phone numbers using structured data markup.

    “Many people also turn to Google to find and discover local businesses, and the best information is often on a website’s contact us or branch locator page. These location pages typically include the address of the business, the phone number, opening hours, and other information,” says Google in a blog post.

    “In addition to building great location pages, businesses are encouraged to continue using Places for Business, which is a fast and easy way to update your information across Google’s service such as Google Maps, the Knowledge Graph and AdWords campaigns,” it adds.

    You can find the recommendations for location pages for local businesses and organizations here. They cover how to make each location’s info accessible, how to let Googlebot discover, crawl, and index location pages (including JavaScript and other page assets), how location info should be presented, and how to use schema.org markup.

    Schema.org supports four types of phone numbers: customer service, technical support, billing support, and bill payment. For each one, you must indicate whether it’s toll-free, whether it’s suitable for the hearing-impaired, and whether it’s global or only for specific countries. More on all this here.
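
    To give a rough idea of what that markup can look like, here’s a minimal, hypothetical JSON-LD snippet using schema.org’s Organization and ContactPoint types (the URL and phone number are placeholders; Google’s own documentation is the authoritative reference):

      <script type="application/ld+json">
      {
        "@context": "http://schema.org",
        "@type": "Organization",
        "url": "http://www.example.com",
        "contactPoint": [{
          "@type": "ContactPoint",
          "telephone": "+1-800-555-0199",
          "contactType": "customer service",
          "contactOption": ["TollFree", "HearingImpairedSupported"],
          "areaServed": ["US", "CA"]
        }]
      }
      </script>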

    Image via Google

  • An Update (Kind Of) On How Google Handles JavaScript

    The latest Google Webmaster Help video provides an update on where Google is on handling JavaScript and AJAX. Well, an update on where they were nearly a year ago at least.

    Matt Cutts responds to this question:

    JavaScript is being used more and more to progressively enhance content on page & improve usability. How does Googlebot handle content loaded (AJAX) or displayed (JS & CSS) by Javascript on pageload, on click?

    “Google is pretty good at indexing JavaScript, and being able to render it, and bring it into our search results. So there’s multiple stages that have to happen,” Cutts says. “First off, we try to fetch all the JavaScript, CSS – all those sorts of resources – so that we can put the page under the microscope, and try to figure out, ‘Okay, what parts of this page should be indexed? What are the different tokens or words that should be indexed?’ that sort of thing. Next, you have to render or execute the JavaScript, and so we actually load things up, and we try to pretend as if a real browser is sort of loading that page, and what would that real browser do? Along the way, there are various events you could trigger or fire. There’s the page on load. You could try to do various clicks and that sort of thing, but usually there’s just the JavaScript that would load as you start to load up the page, and that would execute there.”

    “Once that JavaScript has all been loaded, which is the important reason why you should always let Google crawl the JavaScript and the CSS – all those sorts of resources – so that we can execute the page,” he continues. “Once we’ve fetched all those resources, we try to render or execute that JavaScript, and then we extract the tokens – the words that we think should be indexed – and we put that into our index.”

    “As of today, there’s still a few steps left,” Cutts notes. “For example, that’s JavaScript on the page. What if you have JavaScript that’s injected via an iframe? We’re still working on pulling in indexable tokens from JavaScript that are accessible via iframes, and we’re getting pretty close to that. As of today, I’d guess that we’re maybe a couple months away although things can vary depending on engineering resources, and timelines, and schedules, and that sort of thing. But at that point, then you’ll be able to have even included Javascript that can add a few tokens to the page or that we can otherwise index.”
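
    In practical terms, the “always let Google crawl the JavaScript and the CSS” advice comes down to making sure robots.txt doesn’t block those resources. A hypothetical robots.txt along these lines (the directory names are made up) keeps them fetchable:

      User-agent: Googlebot
      # Don't block script and style directories with rules like
      #   Disallow: /js/
      #   Disallow: /css/
      # or Googlebot can't render pages the way a browser would.
      # Leaving them unblocked (or explicitly allowing them) is enough:
      Allow: /js/
      Allow: /css/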

    It’s worth noting that this video was recorded almost a year ago (May 8th, 2013). That’s how long it can take for Google to release these things sometimes. Cutts notes that his explanation reflects that particular point in time. We’re left to wonder how far Google has really come since then.

    There’s that transparency we’re always hearing about.

    He also notes that Google’s not the only search engine, so you may want to think about what other search engines are able to do. He also says Google reserves the right to put limits on how much it’s going to index or how much time it will spend processing a page.

    Image via YouTube

  • Here’s A Look At Demand Media’s Latest Google-Proofing Efforts [Updated]

    Article Updated: See the end.

    As reported last month, Demand Media is now extending its content creation services to brands, publishers and agencies, essentially selling the kind of content it has been using to get search traffic, and monetizing it through brand partnership-based content marketing since taking a beating from Google’s algorithm.

    If you’re not familiar with Demand Media’s history with Google, the tl;dr version is: DM created tons of content based on things people search for in order to get traffic on those searches. Often, this content was of questionable quality. Google eventually changed its algorithm (the Panda update) in a way that made it so this type of content didn’t rank so well, and ultimately impacted DM’s revenue. DM has made numerous changes to its content, design and strategy since, and has at times appeared to have recovered, but in recent months, the company’s Google fortune has not been great, and as others have tried to do, they are seeking business that is not so reliant on the search behemoth.

    So as part of its business, it now sells content for others to host on their sites. DigiDay is pointing to some specific examples of what exactly they’re offering. Here’s an article on Samsung.com about “Organizing Ideas for Small Homes”.

    For Michelob Ultra:

    Pretty standard DM-style content.

    It just so happens that Google is now going after guest blogging (controversially, I might add). We have to wonder if Google will target stuff like this. I’m not sure this stuff is doing all that well in search anyway (I don’t see Michelob Ultra ranking anywhere for “anti-aging foods,” for example), but it would be surprising if Google didn’t at least have its eye on it.

    Google is also working on algorithm changes that reward authorities on topics, so Michelob Ultra probably doesn’t stand to gain a lot of authority on health foods.

    It doesn’t look like the Demand Media stories are including any links in the author bios. Google’s attack on guest blogging has some reputable sites afraid to keep such links natural (meaning without nofollow added), or to keep them at all.

    These types of articles could potentially do well in social media, though Facebook has not been kind to brands with its own algorithm changes, and is steadily decreasing the organic reach of Page posts, so it’s hard to say how much value brands are really getting out of this content. Of course there are other ways to generate traffic besides Google and Facebook.

    Either way, Demand Media seems to be working to overcome its reputation for low-quality content. DigiDay spoke with VP of Marketing Kristen Moore, who basically said brands have been skeptical about the content, but the company has been showing them the newer content compared to the old, and convincing them. They “get past it,” she says.

    It sounds like Demand Media is also managing to get some mileage out of its existing content, rather than having to have it all created from scratch for these brands. According to the report, Moore cited Demand’s “loads of evergreen content and SEO expertise” as an advantage it has over other content providers.

    Hmm. SEO expertise. Perhaps Google is still being targeted after all. Or maybe it’s a Bing strategy.

    Update: We reached out to Moore for further comment. “We aren’t in a position to comment on Google’s or any other company’s business practices. We focus on creating the best experience for consumers on our sites and providing the best content for brands to meet their content marketing needs,” she tells us.

    “Each of those brands come to us with very different goals and with different marketing plans on how they use their content,” she adds. “In the DigiDay article I was cited as saying SEO expertise was an advantage – what I really said was that we help brands by offering consultation on their distribution and discovery strategies.”

    Currently, she says, they only have a “handful of brands” they’re working with, as it’s a pretty new offering.

    Images via Samsung, Michelob

  • Matt Cutts Talks About Coming Google Algorithm Changes

    Google seems to have announced some coming changes to its algorithm in the latest “Webmaster Help” video. Head of webspam Matt Cutts said the search engine is working on some changes that will help it better determine when a site is an authority on a topic. He didn’t give any specific dates or anything, but says he’s “looking forward to those rolling out.”

    Do you think Google is good at determining which sites are authorities on certain topics right now? Do you expect these changes to lead to better results? Let us know what you think in the comments.

    The topic came up when Blind Five Year Old asked Cutts, “As Google continues to add social signals to the algorithm, how do you separate simple popularity from true authority?”

    Cutts says in the video that the first part of that question makes an “assumption” that Google is using social signals in its ranking algorithm. For the rest of the video, he talks about authority vs. popularity more generally, and doesn’t really get into social signals at all.

    He did recently talk about Facebook and Twitter signals in another video. More on that here. CEO Larry Page has also talked about social signals in search in the past.

    Regarding popularity versus authority, Cutts says, “We’ve actually thought about this quite a bit because from the earliest days it would get us really kind of frustrated when we would see reporters talk about PageRank, and say, ‘PageRank is a measure of popularity of websites,’ because that’s not true.”

    He goes on to talk about how porn sites are popular because a lot of people go to them, but not a lot of people link to them, and how on the other hand, a lot of people link to government websites, but not as many go to them. They want the government sites to have authority, but porn sites not so much.

    “You can separate simple popularity from reputation or authority, but now how do we try to figure out whether you’re a good match for a given query?” Cutts continues. “Well, it turns out you can say, take PageRank for example – if you wanted to do a topical version of PageRank, you could look at the links to a page, and you could say, ‘OK, suppose it’s Matt Cutts. How many of my links actually talk about Matt Cutts?’ And if there are a lot of links or a large fraction of the links, then I’m pretty topical. I’m maybe an authority for the phrase Matt Cutts.”

    “It’s definitely the case that you can think about not only taking popularity, and going to something like reputation, which is PageRank, but you could also imagine more topical…’Oh, you’re an authority in the medical space” or ‘You’re an authority in the travel space’ or something like that. By looking at extra signals where you could say, ‘Oh, you know what? As a percentage of the sorts of things we see you doing well for or whatever, it turns out that your links might be including more anchor text about travel or about medical queries or something like that,’ so it is difficult, but it’s a lot of fun.”

    Then we get to the part about the upcoming algorithm changes.

    “We actually have some algorithmic changes that try to figure out, ‘Hey, this site is a better match for something like a medical query, and I’m looking forward to those rolling out, because a lot of people have worked hard so that you don’t just say, ‘Hey, this is a well-known site, therefore it should match for this query.’ It’s ‘this is a site that actually has some evidence that it should rank for something related to medical queries,’ and that’s something where we can improve the quality of the algorithms even more.”

    If they actually work, these changes could indeed provide a boost to search result quality. In fact, this is just the kind of thing that it seemed like the Panda update was originally designed to do. Remember how it was initially referred to as the “farmer” update because it was going after content farms, which were saturating the search results? Many of those articles from said farms were drowning out authoritative sites on various topics.

    There is supposed to be a “next generation” Panda update hitting sometime as well, though Cutts didn’t really suggest in the video that this was directly related to that. That one, he said, could help small businesses and small sites.

    After the initial Panda update, Google started placing a great deal of emphasis on freshness, which led to a lot of newer content ranking for any given topic. This, in my opinion, didn’t help things much on the authority side of things. Sometimes more authoritative (or frankly relevant) content was again getting pushed down in favor of newer, less helpful content. I do think things have gotten a bit better on that front over maybe the past year or so, but there’s always room for improvement.

    It’s interesting that Google is looking more at authority by topic now, because Cutts has also been suggesting that blogs stay on topic (I guess whatever topic Google thinks you should be writing about) at least when it comes to guest blog posts. As you may know, Google has been cracking down on guest blog posts, and when one site was penalized, Cutts specifically suggested that the topic of one post wasn’t relevant to the blog (even though most people seem to disagree with that).

    Either way, this is another clue that Google really is looking at authority by topic. It seems like it might be as good a time as any to be creating content geared toward a specific niche.

    Do you think these algorithm changes will help or hurt your site? Will they improve Google’s search results? Let us know what you think in the comments.

  • Matt Cutts Does His Best HAL 9000

    In the latest “Webmaster Help” video, Google’s Matt Cutts takes on a question from “Dave,” who asks, “When will you stop changing things?”

    “Look, I’m sorry, Dave, but I can’t do that,” he replies.

    Yes, the quote is actually, “I’m sorry, Dave. I’m afraid I can’t do that,” (at least in the movie) but we’re pretty sure that’s what Cutts was going for.

    He goes on to explain that Google is always going to keep changing. Breaking news, I know.

    Also, his shirt changes colors throughout the video.

    Image via YouTube

  • Google Adjusts Index Status Data In Webmaster Tools

    Google announced an adjustment to the way sites’ index status data appears in Webmaster Tools. The index status feature now tracks a site’s indexed URLs for both HTTP and HTTPS as well as for verified subdirectories. In the past, it didn’t show data for HTTPS sites independently. Everything was included in the HTTP report.

    The move makes a great deal of sense as more and more sites move over to HTTPS (at least partially), and according to the company, people have been asking for this change.

    Google’s John Mueller said, “If you’re a data-driven SEO (or just love to see how your site’s indexed), you’ll love this change.”

    Now, each of these will show their own data in the Webmaster Tools Index Status report as long as they’re each verified separately:

    http://www.example.com/
    https://www.example.com/
    http://example.com
    https://example.com
    http://www.example.com/folder/
    https://www.example.com/folder/
    http://example.com/folder/
    https://example.com/folder/

    Google notes that if you have a site on HTTPS or if some of your content is indexed under different subdomains, you’ll see a corresponding change in the report’s graph.

    “In order to see your data correctly, you will need to verify all existing variants of your site (www., non-www., HTTPS, subdirectories, subdomains) in Google Webmaster Tools. We recommend that your preferred domains and canonical URLs are configured accordingly,” says Google’s Zineb Ait Bahajji. “Note that if you wish to submit a Sitemap, you will need to do so for the preferred variant of your website, using the corresponding URLs. Robots.txt files are also read separately for each protocol and hostname.”
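
    To illustrate the Sitemap point, a minimal sitemap for a site whose preferred variant is https://www.example.com/ might look something like this (just a sketch, reusing the example URLs above):

      <?xml version="1.0" encoding="UTF-8"?>
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <!-- List only the preferred (here, HTTPS www) variant's URLs -->
        <url><loc>https://www.example.com/</loc></url>
        <url><loc>https://www.example.com/folder/</loc></url>
      </urlset>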

    You can read up more on all of this here.

    Image via Google

  • Is Google More Focused On Penalties Or Positive Features?

    While it’s nothing new, a lot of webmasters are frustrated with Google over penalties their sites have received. The recent attack on guest blog posts has sparked a whole new round of outcries. Google says, however, that it tries to focus more on proactive and positive features, and less on penalties. You wouldn’t know it from the conversations happening every day on search blogs and forums, but that’s the stance Google is taking. Meanwhile, Google thinks we’re too bored to want to see lists of algorithm changes (which would presumably include some of these “positive” and “proactive” things). It used to release these regularly.

    We also keep seeing tweets from Matt Cutts about how Google is taking action on various networks. It doesn’t exactly convey a lack of focus on penalties.

    Do you think Google really is more focused on proactive features than it is on penalizing other sites? Let us know what you think.

    So you know how Google penalized a site and cited one link from a guest post that was on a topic Google didn’t think belonged on the site (even though the site owner felt it did, and most other people seem to see a natural topical fit)?

    Danny Sullivan wrote an article about that, which one Twitter user shared, saying that Google penalties have “jumped the shark”. Matt Cutts responded:

    Danny also jumped in, and Matt again:

    On that note about Google focusing less on penalties and more on proactive, positive stuff like natural language, Aaron Wall threw up a survey.

    Here’s what it’s showing as of Thursday:

    Maybe perception would be different if Google hadn’t stopped putting out those monthly lists of algorithm updates, which might have illustrated some of that natural language-type stuff more. Maybe.

    Cutts also had to defend Google from comparisons to the Empire in Star Wars.

    Do you think Google really is more focused on adding positive features to its search engine as opposed to penalizing sites? Let us know in the comments.

    Note: This article has been updated to include more context and tweets.

    Image via PollDaddy

  • Cutts On How Google Views “Sister Sites”

    In the latest “Webmaster Help” video from Google, Matt Cutts takes on the following question:

    Is there any way Google identifies “sister” websites? For example, relationships between eBay.co.uk and eBay.com? Does linking from one to the other taken as paid or unnatural? And I’m strictly talking about good, genuine ccTLDs for businesses?

    “It is the case that we try to interpret as best we can the relationships there are on the web,” he says. “At the same time it’s very helpful if you can tell us a little bit about what your sites are so that we can return the correct content to users regardless of which country they’re coming from. So let’s look at the spectrum. On one hand, you’ve got ebay.co.uk and ebay.com, and we need to know that those are somehow related, and then on the other hand, we’ve got all the way down to somebody who has a hundred different websites all about medical malpractice or something like that.”

    On the ccTLD case, he adds, “It is the case that we try to figure out that those sites are related, but we are doing the best we can, and if we get a little bit more help, then we can say, ‘Oh, this is a German user. They should get ebay.de or yoursite.de.’ If it’s a French user, they should get the .fr version…that sort of thing. So the best way to help is to use something called hreflang. You can do that inside of a webpage, where you can mark up, ‘Hey, on ebay.com, a French version of this page is over here, and the German version of this page is over here, or if you don’t want to have that in all the different pages on your site, you can also make a sitemap. And you can just say, ‘Okay, over here is one version for a country, here’s another version for a country.’”

    He says doing this is really helpful because Google tries to determine where users are coming from, what their language is, and then show them the best version of your page. If you tell Google what the right versions are, they’re less likely to screw it up.
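
    As a rough sketch of the on-page version of hreflang (the domains below are placeholders standing in for the kind of ccTLD setup Cutts describes), each variant of a page lists all of its alternates in the head:

      <!-- Placed in the <head> of every language/country variant of the page -->
      <link rel="alternate" hreflang="en-GB" href="http://www.example.co.uk/" />
      <link rel="alternate" hreflang="de-DE" href="http://www.example.de/" />
      <link rel="alternate" hreflang="fr-FR" href="http://www.example.fr/" />
      <link rel="alternate" hreflang="x-default" href="http://www.example.com/" />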

    He cautions that they might or might not trust links between any given sites on “any given basis.” For the most part, however, he says he wouldn’t worry about such links being seen as paid or unnatural, because this kind of cross-linking is pretty normal.

    He does advise against linking to all versions of the site in the footer because it looks spammy. I’m pretty sure he’s covered all this before.

    When the sites aren’t about different languages or countries, and you have a bunch of sites, then he says you should be a lot more careful about your linking.

    Image via YouTube

  • PSA: The Topics You Include On Your Blog Must Please Google

    Have you thought about branching out in different directions for your blog content? It might not be a great idea if you’re worried about staying on Google’s good side. No, it would appear that you need to stay focused on what you’re already known for, or at least stay within the confines of what Google thinks your site is supposed to be. That is, when it comes to having guest authors on your site.

    This seems to be the message Google is sending with a recent Twitter exchange between Matt Cutts and a respected SEO who found himself penalized.

    Is Google going too far with this stuff? Let us know what you think in the comments.

    Wouldn’t you think that you’d want guest authors for different topics that you’re not used to writing about? You know, like experts on said topics? It would seem that if you do this, you’re going to want to make sure their links are nofollowed if you want to avoid Google’s wrath. The problem with this is that these experts have less of an incentive to write a guest post if they’re not going to get any credit for their links. I guess that’s the point as far as Google is concerned, and for spammy posts, perhaps it makes sense, but what about legitimate posts? These are cases when some would argue that those links SHOULD count for something.
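
    For anyone unfamiliar with the mechanics, nofollowing a guest author’s link just means adding rel="nofollow" to the anchor tag in the bio. A made-up example (the author and URL here are hypothetical):

      <p>Jane Doe writes about email marketing at
        <a href="http://www.example.com/" rel="nofollow">Example.com</a>.</p>
      <!-- With rel="nofollow", the link tells search engines not to count it
           as an endorsement, so it passes along no PageRank credit -->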

    If you write an article for a reputable site, and that reputable site vouches for your article enough to publish it, then why shouldn’t you get some credit for having your link on that site? Isn’t that a signal that you are an authority, and that your site should reflect that?

    It’s no secret that Google has launched an attack against guest blogging. Since penalizing MyBlogGuest earlier this month, Google has predictably reignited the link removal hysteria.

    More people are getting manual penalties related to guest posts. SEO Doc Sheldon got one specifically for running one post that Google deemed not on-topic enough for his site, even though it was about marketing. Maybe there were more, but that’s the one Google pointed out.

    The message he received (via Search Engine Roundtable) was:

    Google detected a pattern of unnatural, artificial, deceptive or manipulative outbound links on pages on this site. This may be the result of selling links that pass PageRank or participating in link schemes.

    He shared this in an open letter to Matt Cutts, Eric Schmidt, Larry Page, Sergey Brin, et al. Cutts responded to that letter with this:

    To which Sheldon responded:

    Perhaps that link removal craze isn’t so irrational after all. Irrational on Google’s part, perhaps, but who can really blame webmasters for succumbing to the pressure when Google dictates what content they run on their sites and they rely on Google for traffic and, ultimately, business?

    Here’s the article in question. It’s about best practices for Hispanic social networking. It’s on a blog called Doc Sheldon’s Clinic: “Content Strategy, SEO Copywriting, Tools, Tips & Tutorials.” Sheldon admitted it wasn’t the highest quality post in the world, but also added that it wasn’t totally without value, and noted that it wasn’t affected by the Panda update (which is supposed to handle the quality part algorithmically).

    Cutts’ tweet didn’t indicate that the problem was with the quality of the post (which might have been a fairer point), but that a post on that subject didn’t belong on his blog. Combine that with the crackdown on guest posting, and a lot of blogs and bloggers might be in for some very interesting times in the near future.

    I have a feeling that link removal craze is going to be ramping up a lot more.

    Ann Smarty, who runs MyBlogGuest, weighed in on the conversation:

    Update: As a reader pointed out in the comments, Google seems to be sending webmasters contradictory messages about “unrelated” content. Google’s John Mueller recently said this about the Disavow tool: “Just to be completely clear on this: you do not need to disavow links that are from sites on other topics.”

    That carries the connotation that Google isn’t that concerned about links coming from unrelated sites. So why are they concerned about content on your site that they feel is unrelated to other content on your site? And frankly, who are they to decide what kind of content mix you can offer?

    Maybe this is all being blown out of proportion, but that was a pretty bold tweet from Cutts. People discussing it over at Inbound.org think it’s downright “insane”. There are also some good points in that discussion about nofollow links telling users and search engines two different things, which Google typically advises against.

    What do you make of this mess? Discuss.

    Image via YouTube

  • Matt Cutts Gives SEO Tip For Disavow Links Tool

    Google’s Matt Cutts randomly tweeted a tip about the Disavow Links tool. Don’t delete your old file if you upload a new one because it “confuses folks,” and the last thing you’d want to do is confuse Google if you’re trying to fix problematic links.

    Here’s what he said exactly:

    The Disavow Links tool has come up in the SEO conversation several times this month. In early March, we heard about Google’s “completely clear” stance on disavowing “irrelevant” links.

    Then, a couple weeks ago, Cutts said that you should go ahead and disavow links even if you haven’t been penalized in some cases.

    Later still, Google’s John Mueller said that Google doesn’t use data from the tool against the sites whose URLs are being disavowed.
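
    For reference, the disavow file itself is just plain text, one URL or domain per line, with optional comments. A hypothetical example (the domains are made up):

      # Links we asked site owners to remove but got no response on
      domain:spammy-directory.example
      http://low-quality-blog.example/guest-post-with-my-link.html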

    Image via YouTube

  • Google Granted The Panda Patent?

    Google has been granted what could be its patent for the controversial Panda update. It was filed on September 28, 2012, a year and a half after the update first launched, and awarded this week on March 25th.

    It may or may not be related to the actual Panda update, though Search Engine Land seems pretty sure that it is with the headline “Google Granted Patent For Panda Algorithm”. Update: They’ve now updated the story to say, “This patent may have nothing to do with the Panda algorithm, to be fair.”

    Here’s the abstract for the patent on “Ranking Search Results”:

    Methods, systems, and apparatus, including computer programs encoded on computer storage media, for ranking search results. One of the methods includes determining, for each of a plurality of groups of resources, a respective count of independent incoming links to resources in the group; determining, for each of the plurality of groups of resources, a respective count of reference queries; determining, for each of the plurality of groups of resources, a respective group-specific modification factor, wherein the group-specific modification factor for each group is based on the count of independent links and the count of reference queries for the group; and associating, with each of the plurality of groups of resources, the respective group-specific modification factor for the group, wherein the respective group-specific modification for the group modifies initial scores generated for resources in the group in response to received search queries.

    The patent lists apparent Panda update author Navneet Panda along with Vladimir Ofitserov.

    I’ll leave it to the Internet’s authority on Google patents, Bill Slawski, to explain, as he says he’ll be digging into it more.

    Last month marked the three-year anniversary of the original Panda update. It has since been integrated with Google’s indexing, and is no longer announced each time it rolls out. Google has indicated it launches roughly once a month.

    The update has harmed many businesses, and forced some to “Google-proof” their sites.

    Google recently said it’s working on the “next generation” of the Panda, which it said will be kinder to small businesses and small sites. We’ll see.

    Image via Wikimedia Commons

  • Google’s Attack On Guest Blogging Reignites Irrational Link Removal Craze

    Remember the days when you could build some nice solid links by writing guest posts on other people’s blogs and publications? Well, it’s entirely possible that those days are still here, but Google is freaking people out once again in its efforts to crack down on so-called webspam.

    Have you written guest blog posts in the past? Are you worried about links from those coming back to haunt you? Let us know in the comments.

    Google says it’s taking action on guest blogging, and people that have written completely legitimate guest posts are seeking the removal of links to their sites that may have actually been helping them. In some cases, it’s hard to see why they would possibly hurt.

    As you may know, Google’s Matt Cutts announced last week that Google took action on a “large guest blog network”.

    That network turned out to be Ann Smarty’s MyBlogGuest. A lot of people didn’t think her site should have been penalized, but even that is somewhat beside the point. It is a site dedicated to matching guest bloggers with blogs as an “Internet marketing tactic”.

    When Cutts announced that they’d taken action, he referenced a blog post he made earlier this year in which he proclaimed guest blogging for SEO “done”.

    After he first made the post, he added an update that softened the guidance, painting a much less aggressive picture than the original post (or at least than the way people took it in the beginning). He wrote:

    There are still many good reasons to do some guest blogging (exposure, branding, increased reach, community, etc.). Those reasons existed way before Google and they’ll continue into the future. And there are absolutely some fantastic, high-quality guest bloggers out there. I’m also not talking about multi-author blogs. High-quality multi-author blogs like Boing Boing have been around since the beginning of the web, and they can be compelling, wonderful, and useful.

    I just want to highlight that a bunch of low-quality or spam sites have latched on to “guest blogging” as their link-building strategy, and we see a lot more spammy attempts to do guest blogging. Because of that, I’d recommend skepticism (or at least caution) when someone reaches out and offers you a guest blog article.

    Okay, fair enough.

    But now we’re seeing people who have written high quality content as guest posts request to have their links removed. Very shortly after last week’s announcement, we had multiple emails from guest authors of the past looking to have the links in their author bios removed.

    I’m not going to share their names, but these were not low-quality, or in any way spammy articles. If they had been, we wouldn’t have accepted them. One in particular was about a very specific piece of legislation that was a particularly hot topic at the time it was written, and brought interesting insight to the discussion. That’s why we published it. We made the “editorial choice” (to use a phrase Cutts often uses) to put these articles on our site. The articles weren’t published elsewhere (it was a requirement that they be unique to our site in the first place), so it wasn’t like they were all over the web as duplicate content.

    In fact, when asked, one of the writers told us that Google had simply told them they had taken a manual action against their site after detecting “some artificial links”. Our article in question wasn’t mentioned in any way in Google’s messaging, but this person told us they were deciding themselves to have all keyword-oriented links pointing to their site removed.

    Are these specific links hurting these sites? Probably not, but who can really say (other than Google) what Google will decide looks spammy. I can’t imagine what would have looked spammy about these particular links. They were pretty standard author bio stuff like you see on just about every article on every site on the Internet, but Google creates this fear, and people go out of their way to remove legitimate links as a precautionary measure. Let’s hope those links weren’t really working in the sites’ favor; otherwise, the authors are just hurting themselves more by losing PageRank value.

    Keep in mind, links are still a particularly important signal in search quality. Cutts said this himself very recently. One could argue that a link in an author bio shouldn’t carry as much weight as a link referencing some other piece of information within the article itself, but that really depends on the situation, doesn’t it? A link to the author’s website gives you context about who’s writing the article, which can lend credibility. If someone’s writing an article about a topic, it’s nice to see that they have experience with it. On the flip side, how often do you see generic words in articles linked for no apparent reason? It happens all the time. Is someone linking the word Wikipedia to the Wikipedia site, for example, some big signal of relevance for the Wikipedia homepage? Probably not, unless the article is specifically about Wikipedia and/or its homepage. Everyone knows what Wikipedia is. That link isn’t adding any value, and certainly not as much value as the link in the author’s bio, which shows you more about who you’re reading. If the link is to a specific Wikipedia article that’s relevant to what the author is talking about, then that’s a different story.

    I don’t know how Google was viewing the specific links in question with regards to these guest articles. I don’t see any reason for them to look unfavorable, but Google isn’t always the easiest thing to understand, and plenty of people have felt illegitimately burned by Google’s wrath in the past. Maybe it is smart for these people to get rid of the links. It’s hard to say.

    But it’s likely that this is only the latest in Google spreading a similar kind of link hysteria to what we’ve seen in recent years when people were doing things like trying to get links removed from StumbleUpon. Or when they were afraid to link to their own sites.

    Google has a lot of power on the web, but never forget that Google isn’t the web itself. It’s still links that connect the web’s pages.

    Should people look to have links from legitimate guest blog posts removed? Let us know what you think in the comments.

    Image via YouTube

  • Cutts On Determining If You Were Hit By An Algorithmic Penalty

    If you’ve ever lost your search engine rankings to a competing site, you may have wondered if you were suffering from an algorithmic penalty from Google or if your content simply wasn’t as good as your competitors’. You’re not the only one.

    Google’s Matt Cutts takes on this question in the latest “Webmaster Help” video:

    How can you tell if your site is suffering from an algorithmic penalty, or you are simply being outgunned by better content?

    First he addresses manual penalties. Make sure that’s not what you’re dealing with by checking Webmaster Tools. You’ll get a notification if so, and then you can go from there. He also notes you can learn about crawl errors in WMT. Look for that kind of stuff. But if that seems all well and good, then you might want to think about the algorithm.

    “It’s tough because we don’t think as much or really much at all about algorithmic ‘penalties,’” Cutts says. “Really, the webspam team writes all sorts of code, but that goes into the holistic ranking that we do, and so if you’re affected by one algorithm, you call it a penalty, and if you’re affected by another algorithm, do you not call it a penalty, is a pretty tough call to make, especially when the webspam team is working on more and more general quality changes – not necessarily things specifically related to webspam – and sometimes general quality people work on things that are related to webspam, and so deciding which one to call which is kind of hard to do.”

    Webmasters might get a better idea of what exactly they’re dealing with if Google still provided its monthly lists of algorithm changes, but they think the world was “bored” with those, so they’re not putting them out anymore.

    “We rolled out something like 665 different changes to how we rank search results in 2012,” Cutts continues. “So on any given day, the odds that we’re rolling out some algorithmic change are pretty good. In fact, we might be rolling out a couple if you just look at the raw number of changes that we’re doing. However, when we see an algorithmic change that we think will have a pretty big impact, we do try to give people a heads up about that. So for example, the Penguin algorithm, which is targeted towards webspam or the Panda algorithm, which is targeted towards quality content on the web…whenever we have large-scale changes that will affect things, then we tend to do an announcement that ‘Oh yeah, this changed,’ or ‘You should look at this particular date,’ and that can be a good indicator to know whether you’re affected by one of those sort of jolting algorithms that has a big impact.”

    Lately, they’ve mostly been announcing manual penalties, such as on link networks and on guest blogging sites.

    He continues, “What you’ve seen is, for example, Panda has become more and more integrated into indexing, and it’s had less of a jolting impact, and in fact we’ve gotten it so that it changes the index on a pretty regular basis, and it’s built into the index rather than rolling out on a certain day, and so it’s less useful to announce or talk about Panda launches at this point, whereas Penguin is still a switch that flips or is something that starts rolling out at a discrete time, and so we’re a little more willing to talk about those, and let people know and have a little heads up, ‘Hey, you might be affected by the Penguin algorithm.’”

    I think people would still be interested in knowing just when Panda is rearing its head, even if it’s getting “softer” in its old age. Again, even those monthly lists would be helpful. Cutts did say recently that Panda updates happen roughly once a month.

    “In general, if your site is not ranking where you want it to rank, the bad news is it’s a little hard and difficult to say whether you’d call it a penalty or not. It’s just part of ranking,” he says. “The good news is it is algorithmic, and so if you modify your site…if you change your site…if you apply your best guess about what the other site is doing that you should be doing or that it is doing well, then it’s always possible for the algorithms to re-score your site or for us to re-crawl and re-index the site, and for it to start ranking highly again. It’s kind of tricky because we have a large amount of algorithms that all interact…”

    A large number of algorithms that webmasters used to get hints about via monthly lists of algorithm updates that Google is no longer providing.

    Image via YouTube

  • How Googlebot Treats Multiple Breadcrumbs On E-Commerce Pages

    Google has a new “Webmaster Help” video out about e-commerce pages with multiple breadcrumb trails. This is the second video in a row to deal specifically with e-commerce sites. Last time, Matt Cutts discussed product pages for products that are no longer available.

    This time, he takes on the following question:

    Many of my items belong to multiple categories on my eCommerce site. Can I place multiple breadcrumbs on a page? Do they confuse Googlebot? Do you properly understand the logical structure of my site?

    “It turns out, if you do breadcrumbs, we will currently pick the first one,” he says. “I would try to get things in the right category or hierarchy as much as you can, but that said, if an item does belong to multiple areas within your hierarchy it is possible to go ahead and have multiple breadcrumbs on a page, and in fact that can, in some circumstances, actually help Googlebot understand a little bit more about the site.”

    “But don’t worry about it if it only fits in one, or if you’ve only got breadcrumbs for one,” Cutts continues. “That’s the way that most people do it. That’s the normal way to do it. We encourage that, but if you do have the taxonomy (the category, the hierarchy), you know, and it’s already there, and it’s not like twenty different spots within your categories…if it’s in a few spots, you know, two or three or four…something like that, it doesn’t hurt to have those other breadcrumbs on the page. And we’ll take the first one. That’s our current behavior, and then we might be able to do a little bit of deeper understanding over time about the overall structure of your site.”

    For more about how Google treats breadcrumbs, you might want to take a look at this page in Google’s webmaster help center. In fact, it even gives an example of a page having more than one breadcrumb trail (Books > Authors > Stephen King and Books > Fiction > Horror).
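
    Using that same example, a product page that lives in two categories might simply carry two breadcrumb trails in its HTML, something like this hypothetical sketch (per Cutts, Google currently picks the first one it finds):

      <!-- Primary trail: the one Google currently uses -->
      <nav class="breadcrumb">
        <a href="/books/">Books</a> &gt;
        <a href="/books/authors/">Authors</a> &gt;
        <a href="/books/authors/stephen-king/">Stephen King</a>
      </nav>
      <!-- Secondary trail for the same page -->
      <nav class="breadcrumb">
        <a href="/books/">Books</a> &gt;
        <a href="/books/fiction/">Fiction</a> &gt;
        <a href="/books/fiction/horror/">Horror</a>
      </nav>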

    Image via YouTube

  • Google Takes Action On Guest Blogging

    Google has been warning webmasters about guest blogging for quite a while, but now, the search engine is getting serious.

    Will going after “guest blogging networks” improve Google’s search results? Let us know what you think.

    Head of webspam Matt Cutts tweeted early Wednesday morning that Google has taken action on a large guest blog network, and reminded people about “the spam risks of guest blogging”.

    That link points to a post from January on Matt’s personal blog where he proclaimed that “guest blogging is done.” He later clarified that he meant guest blogging specifically for SEO.

    He didn’t specify which network Google just took action on, but Pushfire CEO Rae Hoffman suggested that MyBlogGuest appeared to be the “winner”.

    Still, from where we’re sitting, the site is in the top three for its name, appearing only under its own Twitter and Facebook pages.

    MyBlogGuest owner Ann Smarty confirmed, however, that her site was indeed penalized.

    The site promises on its homepage, “We don’t allow in any way to manipulate Google Rankings or break any Google rules.” It does promise bloggers a way to build links, which everyone knows is a key signal in Google’s ranking algorithm (Cutts recently said links are still “super important”).

    Barry Schwartz at Search Engine Land points out that Smarty wrote a blog post after Cutts’ January post, saying her network wouldn’t nofollow links. She wrote:

    MyBlogGuest is NOT going to allow nofollow links or paid guest blogging (even though Matt Cutts seems to be forcing us to for whatever reason).

    Instead we will keep promoting the pure and authentic guest blogging concept we believe in.

    She went on to note that she is an SEO who stopped depending on organic rankings a long time ago.

    “I believe in the Internet and its ability of giving little people (like myself) the power of being heard. I can say, I don’t care about Google,” she wrote. “I don’t think Google is THE Internet.”

    She’s right, and one can’t help but admire her attitude, but one also can’t help but wonder how many of those utilizing the network have that attitude.

    The phrase, “Play with fire, and you get burnt” also comes to mind. Google isn’t the Internet, but how many of the people spending time and effort writing guest blog posts are depending on it?

    Apparently Smarty does care about Google after all. Bill Hartzer writes that she told him before Cutts made the announcement, “I really hope that they don’t target MyBlogGuest. There are other guest blogging networks that should be targeted, such as PostJoint, a paid guest blogging network. MyBlogGuest is not a paid network.”

    It stands to reason that Google is going to be going after more of these types of sites the way it has been doing with other link networks.

    Smarty, a well-respected SEO veteran, and MyBlogGuest are getting some support from the webmaster community.


    Do you think MyBlogGuest deserved a Google penalty? Should Google be this concerned with guest blogging? Let us know in the comments.

    Note: This article has been updated since Smarty confirmed the penalty and more has emerged.

    Image via YouTube

  • ‘Disavow Links’ Data Not A Google Ranking Signal…Yet

    Google is not using data from its Disavow Links tool to hurt sites that are being disavowed in search results. That is according to Google’s John Mueller.

    Do you think data from the Disavow Links tool should be used as a ranking signal? Let us know in the comments.

    The topic came up in the Google Webmaster Central product forum (via Search Engine Roundtable). One webmaster started the thread, saying that they received an email from a site with the subject line of “Link Removal Request” which said:

    Dear Web master,

    We recently received a notice from Google stating that they have levied a penalty on our website as they “detected unnatural links” redirecting to our website.

    The only way we can remove this penalty and help Google reconsider putting our website back in their index is by removing these links and we need your help for the same. We request you to consider this request on high priority.

    Following are the details of the links:
    they have given me list of Links of my website with majority comments links .

    We would like to bring your notice that failure to remove these links would require us to file a “Disavow Links” report with Google. Once we submit this report to Google, they may “flag” your site as”spammy” or otherwise if anything is not in compliance with their guidelines. The last thing we want is to have another web master go through this grief!

    Your cooperation in this process would be deeply appreciated. We kindly request you to send us an acknowledgement of this mail along with a confirmation that these links have been removed.
    Thanks a lot for your help.

    If you want to reach out to us mail us on ‘webmaster’s copany email id’

    Regards,
    name of person
    website name

    So no, Google will not “flag your site as spammy” if it’s disavowed.

    Mueller says flat out, “They are wrong. Having URLs from your website submitted in their disavow file will not cause any problems for your website. One might assume that they are just trying to pressure you. If the comment links they pointed to you are comment-spam that was left by them (or by someone working in their name) on your website, perhaps they are willing to help cover the work involved in cleaning their spam up?”

    Maybe they are “pressuring the webmaster,” but still, Google has actually hinted in the past that data from the tool could become a ranking signal.

    In a discussion with Google’s head of web spam Matt Cutts back in 2012, Danny Sullivan asked if “someone decides to disavow links from good sites in perhaps an attempt to send signals to Google these are bad,” if Google is mining the data to better understand what the bad sites are.

    Cutts responded (emphasis mine), “Right now, we’re using this data in the normal straightforward way, e.g. for reconsideration requests. We haven’t decided whether we’ll look at this data more broadly. Even if we did, we have plenty of other ways of determining bad sites, and we have plenty of other ways of assessing that sites are actually good.”

    Like I said at the time, Google does have over 200 signals, but that doesn’t mean there isn’t room for the data to play some role in the algorithm, even if it’s not the weightiest signal. I don’t know how we’ll ever know if Google does decide to start using it. It’s not like Google is listing its algorithm changes every month or anything.

    Cutts added in that conversation, “If a webmaster wants to shoot themselves in the foot and disavow high-quality links, that’s sort of like an IQ test and indicates that we wouldn’t want to give that webmaster’s disavowed links much weight anyway. It’s certainly not a scalable way to hurt another site, since you’d have to build a good site, then build up good links, then disavow those good links. Blackhats are normally lazy and don’t even get to the ‘build a good site’ stage.”

    It does sound like a pretty dumb strategy, and probably not the most effective way to hurt another site. On the other hand, people do dumb stuff all the time.

    But in a more natural sense, mightn’t this data say something about a site? If a lot of people are disavowing links from the same sites, doesn’t that say something?

    But if it were to become a signal, it could be misleading at a time when Google’s unnatural link warnings have so many people scrambling to get all kinds of links (including legitimate ones) removed. It certainly shouldn’t carry too much weight if it ever does make it into the algorithm.

    SEO analyst Jennifer Slegg said it well: “People who have been affected with bad links will very likely take a very heavy-handed approach to the links they disavow in their panic of seeing their traffic drop off a cliff. There is no doubt that some of those good links that are actually helping the site will end up in the list along with poor quality ones because the webmaster is either unclear about whether a link is a bad influence, or just think the starting fresh approach is the best one to go with.”

    In the comments section of the Search Engine Roundtable post, Durant Imboden makes an interesting point: “Isn’t it possible that an unusually high number of disavowals might trigger a manual review of the frequently-disavowed site? In such a case, the disavow tool itself wouldn’t trigger a penalty or other ‘problems for your website,’ but the resulting review might (depending on what was found).”

    Either way, don’t worry about the tool sending any signals about your site for the time being.

    In related news, Cutts spoke about the tool at SMX West last week, where he said that if you’re aware of bad links to your site, you should probably go ahead and disavow them anyway, even if you’re not already penalized. He added on Twitter (when Rae Hoffman tweeted about it) that if it’s one or two links, it may not be a big deal, but the closer it gets to “lots,” the more worthwhile it may be.

    Something to think about.

    Do you think Google should ever include data from the tool in its ranking algorithm? Share your thoughts.

  • What You Should Do For Google On Product Pages For Products That Are No Longer Available

    Google has a new “Webmaster Help” video out, which many ecommerce businesses may find useful. Head of webspam Matt Cutts discusses what to do on your product pages for products that are no longer available.

    Specifically, he answers this user-submitted question:

    How would Google recommend handling eCommerce products that are no longer available? (Does this change as the number of discontinued products outnumbers the active products?)

    He runs down a few different types of cases.

    He begins, “It does matter based on how many products you have and really what the throughput of those products is, how long they last, how long they’re active before they become inactive. So let’s talk about like three examples. On one example, suppose you’re a handmade furniture manufacturer – like each piece you make you handcraft, it’s a lot of work – so you only have, ten, fifteen, twenty pages of different couches and tables, and those sorts of shelves that you make. In the middle, you might have a lot more product pages, and then all the way on the end, suppose you’re craigslist, right? So you have millions and millions of pages, and on any given day, a lot of those pages become inactive because they’re no longer, you know, as relevant or because the listing has expired. So on the one side, when you have a very small number of pages (a small number of products), it probably is worth, not just doing a true 404, and saying, you know, this page is gone forever, but sort of saying, ‘Okay, if you are interested in this, you know, cherry wood shelf, well maybe you’d be interested in this mahogany wood shelf that I have instead,’ and sort of showing related products. And that’s a perfectly viable strategy. It’s a great idea whenever something is sort of a lot of work, you know, whenever you’re putting a lot of effort into those individual product pages.”

    “Then suppose you’ve got your average e-commerce site. You’ve got much more than ten pages or twenty pages,” Cutts continues. “You’ve got hundreds or thousands of pages. For those sorts of situations, I would probably think about just going ahead and doing a 404 because those products have gone away. That product is not available anymore, and you don’t want to be known as the product site that whenever you visit it, it’s like, ‘Oh yeah, you can’t buy this anymore.’ Because users get just as angry getting an out-of-stock message as they do ‘no results found’ when they think that they’re going to find reviews. Now if it’s going to come back in stock then you can make clear that it’s temporarily out of stock, but if you really don’t have that product anymore, it’s kind of frustrating to just land on that page, and see, ‘Yep, you can’t get it here.’”

    He goes on to discuss the Craigslist case a little more, noting that Google has a meta tag that sites can use called “unavailable_after”. Here’s the original blog post where Google announced it in 2007, which discusses it more.

    The tag basically tells Google that after a certain date, the page is no longer relevant, so Google won’t show it in search results after that.
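
    The tag sits in the page’s HTML head. Going by Google’s 2007 announcement, it takes a date in a format along these lines (the expiration date below is just a hypothetical placeholder):

      <!-- Ask Googlebot to stop showing this listing after the given date -->
      <meta name="googlebot" content="unavailable_after: 25-Dec-2014 15:00:00 EST">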

    Image via YouTube

  • Google: Go Ahead And Disavow Links Even If You Haven’t Been Penalized

    Google suggests you go ahead and use its Disavow Links tool if you know of bad links you have out there, even if Google has not penalized you. If you only have a couple, don’t worry about it, but the more you have, the more you’ll want to do it (again, according to Google).

    Here’s what Google’s head of webspam Matt Cutts told Rae Hoffman about it on Twitter:

    Okay, if you “know” you have “bad” links, why not? The problem is that people often don’t know which ones are actually “bad,” and as we’ve seen in the past, people go hog-wild on getting backlinks removed just because they’re afraid Google won’t like them, regardless of whether or not there is evidence of this.

    Cutts also said when the Disavow Links tool came out that most people shouldn’t use it.

    Here he is talking about when you should worry about your links. Should you spend time analyzing your links and trying to remove ones you didn’t create that look spammy? The “simple answer is no.”

    And here’s Google’s “completely clear” stance on disavowing irrelevant links.

    Image via YouTube

  • Google Is Working On The ‘Next Generation’ Of Panda

    Google’s Matt Cutts spoke at the Search Marketing Expo on Thursday, and reportedly said that the search team is working on the “next generation” of the controversial Panda update, which will be softer and more friendly to small sites and businesses.

    Do you expect Google to really be more friendly to small sites? Let us know in the comments.

    Barry Schwartz at SMX sister site Search Engine Land, who was in attendance at the session, has the report. Here’s an excerpt:

    Cutts explained that this new Panda update should have a direct impact on helping small businesses do better.

    One Googler on his team is specifically working on ways to help small web sites and businesses do better in the Google search results. This next generation update to Panda is one specific algorithmic change that should have a positive impact on the smaller businesses.

    Video of the session is not up yet on the SMX YouTube channel as of the time of this writing.

    Related tweets out of the conference:

    Apparently Cutts also said that Panda updates roll out monthly, while Penguin roll-outs can be up to six months apart:

    And you should be able to recover in two to three months:

    And here’s the apparent reason they don’t announce them anymore:

    It’s unclear when this “next generation” Panda will start taking effect. Schwartz thinks it will be at least two or three months, but he admits that is only speculation. Chances are we won’t know about it, since Google isn’t announcing them anymore and they happen so frequently.

    Too bad the “world got bored” with those monthly lists of algorithm changes Google used to put out. Otherwise, maybe we would get a clue.

    Last month, the Panda update turned three years old. Nice to know it’s getting softer in its old age. Still, it’s not stopping businesses from building Google-proof models. Even Demand Media, which recently suffered again from Google’s algorithm, has found a new way to monetize its army of writers.

    Do you think the Panda update can help your site going forward? Share your thoughts in the comments.

    Image via YouTube

  • Google: Expect Announcement Related To ‘Not Provided’

    Google’s Amit Singhal had a discussion with Danny Sullivan on stage at SMX West on Tuesday evening. Danny has now shared a section of that discussion in which the controversial “not provided” subject comes up. Singhal says there may soon be an announcement related to some changes with that – specifically with how Google is currently handling this for organic vs. paid search.

    In case you have no idea what I’m talking about, Google implemented secure search a few years ago, and by doing so, stopped providing publishers with the keywords that searchers using it were typing in to find their pages. It has, however, continued to show this data to advertisers.

    This fact has been brought up repeatedly (often by Sullivan), but Google hasn’t had a lot to say for itself, which is why these new comments from Singhal are pretty interesting. He said (via Sullivan):

    Over a period of time, we [Google’s search and ad sides] have been looking at this issue…. we’re also hearing from our users that they would want their searches to be secure … it’s really important to the users. We really like the way things have gone on the organic side of search.

    I have nothing to announce right now, but in the coming weeks and months as [we] find the right solution, expect something to come out.

    Just what “comes out” remains to be seen, but it seems unlikely that publishers will be getting those keywords back. More likely is that advertisers will lose the data.

    Image via YouTube