Tag: SEO

  • Cutts Says Google Has Taken Action On Another Link Network

    Google has been warning that it would be cracking down on link networks in Germany for a couple weeks or so. First, when Matt Cutts said Google was taking action on French link network Buzzea, he noted that Germany was next on the list of places the search engine would be looking at.

    Then, earlier this week, Google put out a blog post about paid links on its German Webmaster Central blog, and Cutts once again tweeted a reminder.

    Now, Cutts has confirmed that Google has already taken action on one of these German networks, and has more in its sights.

    Google has not really been shy about calling out link networks by name thus far, so it’s kind of interesting that he didn’t name any names this time. That doesn’t mean he won’t. It sounds like some more of them will be getting similar treatment in the immediate future.

    Here’s an old video of Cutts impersonating a dinosaur.

  • Google Propaganda, SEO and Why Marketers Need to Wake Up

    As the entire search world knows, Matt Cutts released a post last week – I’m paraphrasing – warning us that Google now considers “low quality guest posting” to be spam under their guidelines and will begin to take action in accordance with those beliefs.

  • How This Panda Victim Google-Proofed His Business

    Earlier this week, famous web entrepreneur Jason Calacanis unleashed his new app Inside. It’s a news aggregation service, which has proven immediately addictive to fans. While the app itself is an interesting enough story in its own right, there’s another story here that has a moral a lot of online businesses should pay attention to: you need a business that’s Google-proof.

    Does your business rely on Google for traffic? Would you survive if Google de-indexed you? How can you avoid getting killed by an algorithm update? Share your thoughts in the comments.

    If you rely primarily on Google, you’re always at the mercy of its algorithm. One unfavorable move and your business could evaporate, or at the very least be badly damaged. “Diversify your traffic sources” has been a mantra for many since Google unleashed the Panda update in 2011, victimizing many content providers, including Calacanis.

    If you’ll recall, his site Mahalo was among the more well-known victims, and the update led to Calacanis reducing his staff by 10%. Mahalo was never really able to recover from Panda, even after some tweaks to strategy, like relying more on YouTube videos.

    So now he’s taken the team and investors from Mahalo, and created something new. Something Google-proof. It may not sit well with some of the old media crowd (the crowd that still frowns upon Google News), but he’s created something that Google shouldn’t be able to harm.

    Here’s what Inside does: It finds news stories, and employs humans to summarize them in 40 words or 300 characters, and links to the source. It’s really quite simple, but its approach is somewhat refreshing because more than anything else it’s about getting you as much information as possible in as little time as possible. You can consume the gist of countless stories in a relatively short amount of time, and click through to read the stories you really want to know more about from the original source (or at least the source Inside is pointing to). You can browse the “top news,” “all updates,” or your personalized feed, which is constructed of updates based on topics you’ve added (and there are countless topics).

    Inside

    Calacanis has indicated that Inside tries to reduce the friction that wastes people’s time – things like linkbaiting, slideshows and listicles – and just give you the actual need-to-know info from each story. It’s touted mainly as a mobile app, and has launched for iOS and Blackberry. Android is on the way. You don’t really need the app at all though. I’ve been using the web version on Android all week. Just add the bookmark to your homescreen, and you might as well be using the app. It works just fine. It also works just fine from the desktop.

    Inside Desktop

    In an interview with Re/code, Calacanis said it wasn’t worth it to continue to invest in Mahalo because they were “at the mercy of Google’s algorithm or YouTube’s revenue split.”

    “Mahalo made a lot of money, actually, before Google de-indexed us, and really beat us up with their Google search update. But we have plenty of money left, so as an entrepreneur, having great success with Mahalo then a really bad turn of events, we were left with still making millions of dollars, still having a great team, and decided to create a new product.”

    “I don’t hate Google,” he said. “I’m very frustrated with Google. I would be honest. I think they’re good people. I just don’t think they know how to treat partners well.”

    He may not “hate” Google, but it’s clear that there is still some animosity. His comments extended to Twitter.

    Calacanis talked more about Google-proofing his business in an interview with Staci Kramer at Nieman Journalism Lab. He said he doesn’t think building an “SEO-driven” business works anymore, adding that businesses can’t rely on Google not to steal their business “like they did to Yelp and others.” Here’s an excerpt:

    To make a Google-proof company, I wanted to have a killer brand that people would remember and come to like — a product so compelling that it has a repeatable effect. The problem at Mahalo or eHow is you use it for two hours to get your baking recipe, then you don’t use it again for two months — then you use it again for putting up curtains. You really rely on people going to Google.

    With news, people will go directly to a site, which makes it impervious to Google. And the app ecosystem is also impervious to Google. They can’t control apps even though they have a big footprint in Android, nor have they shown a propensity to control the app ecosystem on Android. I think they would get a revolt on their hands if they did. We’re also adding an email component to this.

    So email, social, and apps are three things that Google can’t control. This is very social — people will share. It’s mobile — Google can’t control that. The email function Google moderately can control.

    He later noted that he sees Inside as “kind of analogous to how Google used to run,” in that Google would point you to places on the web and drive traffic, as opposed to “keeping the traffic for themselves”.

    To that point, Google is showing some users a new type of stock search result (which one of Calacanis’ tweets references). It removes the links to competitors like Yahoo Finance and MSN Money. We’re still seeing the old version, which does include links, and have reached out to Google to see if this is just a test or if it is rolling out to everyone. Google had not responded to our request for comment as of this writing. It’s interesting that Google would make such a move given that it’s currently being scrutinized for allegedly anti-competitive practices in Europe, where it has already offered up concessions including giving competitors more links and visibility.

    Calacanis isn’t the only one hurt by Google who launched a Google-proof product this week. Rap Genius also has a new app out. Unlike Calacanis’ Mahalo, Rap Genius wasn’t a victim of Panda, but got busted engaging in what Google deemed to be a “link scheme” earlier this month. The site was penalized, but was quickly (and controversially) able to climb out of the penalty box.

    Either way, the mobile app gives it a Google-proof alternative to relying on Google traffic, which could stop flowing on any given day. Calacanis had comments about this as well.

    Building a Google-proof business is easier said than done, though. Inside and Rap Genius are examples that make sense as separate apps that shouldn’t need to rely on traffic from Google. Others will continue to try to recover their existing sites from Google penalties and updates, which may be the only logical route in many cases.

    But if there’s a clear way to get around needing Google, you’d be well-advised to pursue that. If there’s not a “clear” way, you may want to put some thought into other possible directions.

    As far as Inside is concerned, there’s no guarantee that it’s going to be a hit, but early reviews have been positive, and it’s pretty easy to see potential monetization strategies via in-stream ads, not unlike those on Facebook or Twitter. Calacanis has already said the model is perfect for this, by the way, though it’s unclear when ads might come.

    It’s very early, but Calacanis may have a winner here. A Google-proof winner.

    What do you think of Calacanis’ strategy for non-Google reliance? How hard is it to get away from dependence on the search giant? Share your thoughts in the comments.

    Lead image via Wikimedia Commons

  • Google To Writers: Don’t Upload Articles To Directories

    Google has put out a new “Webmaster Help” video advising webmasters and writers against submitting articles to online article directories. It’s been pretty well-known that Google isn’t incredibly fond of these types of sites for a while now, but the search team is still getting questions about it, so head of webspam Matt Cutts had some advice to share.

    “I think over time, article directories have gotten a little bit of a worse name,” says Cutts. “So just to refresh everybody’s memory, an article directory is basically where you write three, four, or five hundred words of content, and then you’ll include a little bio or some information about you at the bottom of the article, and you might have say three links with keyword-rich anchor text at the bottom of that article, and then you can submit that to a bunch of what are known as ‘article directories,’ which then anybody can download, or maybe they pay to download them, and they’ll use them on their own website. And the theory behind that is that if somebody finds it useful, and puts it on their webpage, then you might get a few links.”

    He continues, “Now, in practice, what we’ve seen is this often turns out to be a little bit of lower quality stuff, and in fact, we’ve seen more and more instances where you end up with really kind of spammy content getting sprayed and syndicated all over the entire web, so in my particular opinion, article directories and just trying to write one article and just syndicating it wildly or just uploading it to every site in the world, and hoping that everybody else will download it and use it on their website – I wouldn’t necessarily count on that being effective. We certainly have some algorithmic things that would mean that it’s probably a little less likely to be successful now compared to a few years ago, for example. My personal recommendation would be probably not to upload an article like that.”

    Google’s Panda update, launched in 2011, had a particularly devastating effect on a lot of article directory sites. It’s hard to imagine anybody being able to get much out of this kind of article submission in the post-Panda world.

    In fact, Google is even advising against guest blog posts (for SEO) these days for pretty much the same reasons it advises against article directories. Guest blogging, you would think, would tend to cater a little bit more to the higher quality side of things, but that doesn’t appear to be how Google views it.

    Of course, Google’s advice assumes that all the articles you’d upload to a directory would be low quality. There’s no way anyone could ever submit high quality content, right?

    Image via YouTube

  • Google Announces Action On Another ‘Link Network’

    Google has steadily been taking out link networks that violate its quality guidelines whenever possible, and Matt Cutts announced on Twitter that they’ve taken care of yet another one. This time it’s the French network Buzzea.

    Interestingly, Buzzea’s home page now displays this:

    Buzzea

    Buzzea put out a press release addressing the situation. It’s in French, but here’s a rough English translation of a snippet:

    Indeed, Buzzea is an advertising network that sells articles meant above all to provide relevant content to users and to facilitate navigation through the links they contain, which is characteristic of any online article. It is important to understand that advertisers were not only trying to work on their SEO through Buzzea. Above all, they wanted to improve their image and credibility by taking advantage of the reputation of bloggers recognized in their respective fields, and of their editorial quality, in the manner of traditional sponsorship.

    The Buzzea team would like to highlight the fact that we are not the main victim of this decision, which will affect thousands of publishers on a case-by-case basis, removing what is perhaps their main source of income, which we deeply regret. It is quite a step backward for a dynamic and passionate blogosphere that was approaching professionalization.

    If you are one of the publishers Buzzea is talking about, it’s probably not a good day for your own rankings.

    In the ongoing link network battle, Cutts indicated on Twitter that Germany will soon be an area of focus.

    Image via Buzzea

  • Google To Well-Established Sites: Don’t Coast On Your Laurels

    Google has released some new advice to webmasters of well-established businesses with domains that have been around for a long time. Keep up with the times, or you’ll be left behind.

    Said advice comes in the form of a “Webmaster Help” video from head of webspam, Matt Cutts. A webmaster asked, “I have been in business for over 14 years with my domain, and see much newer domains passing me. Any algorithms to protect older domains/sites in business from newer sites with more spam?”

    Cutts decided to “answer a slightly different question,” leaving off the “more spam” and simply focusing on the older domains vs. newer domains aspect.

    “The advice that I’d give to you as the owner of a site that’s been around for fourteen years is to take a fresh look at your site,” he says. “A lot of times if you land on your site, and you land on a random website from a search result, you know, even if they’ve been in business for fifteen years, fourteen years, sometimes they haven’t updated their template or their page layout or anything in years and years and years, and it looks, frankly, like sort of a stale sort of an older site, and that’s the sort of thing where users might not be as happy about that.”

    “And so if you do run an older site or a very well-established site, I wouldn’t just coast on your laurels,” he adds. “I wouldn’t just say, ‘Well I’m number one for now, and everything is great,’ because newer sites, more agile sites, more hungry sites, more sites that have a better user experience – they can grow, and they can eclipse you if you don’t continue to adapt, and evolve, and move with the times. So I wouldn’t say just because you are a domain that’s well-established or has been around for a long time, you will automatically keep ranking. We’ve seen plenty of newer domains and businesses bypass older domains.”

    Of course, it’s unclear whether or not the person asking the question actually had an old, stale site. There are over 200 signals Google takes into account, but “keeping up with the times” is clearly something businesses will need to consider if they hope to maintain a significant online presence.

    This probably doesn’t necessarily mean complete design overhauls every year, but perhaps some gradual tweaking is in order as time goes on. What are those outranking you doing better than you?

    Image via YouTube

  • Google Continues To Improve Crawling Capabilities For Smartphone Content

    Throughout 2013 and into 2014, Google has been making various improvements to the way it handles website content on mobile devices. That continues this week with the announcement of a new user-agent for crawling smartphone content.

    Google says it’s retiring the “Googlebot-Mobile” user-agent for smartphones. That name has been used to refer to various mobile-specific crawlers that index content for both feature phones and smartphones. The smartphone variant will be going away within three to four weeks. After that, the user-agent for smartphone crawling will simply identify itself as “Googlebot,” but will list “Mobile” elsewhere in the user-agent string.

    Google shows the difference between the new and old user-agents:

    Google Crawling

    Google says it has seen cases where webmasters have inadvertently blocked smartphone crawling while trying to only block feature phone crawling.

    “This ambiguity made it impossible for Google to index smartphone content of some sites, or for Google to recognize that these sites are smartphone-optimized,” explains Google smartphone search engineer Zhijian He.

    The new Googlebot for smartphones crawler will follow the robots.txt, robots meta tag and HTTP header directives for Googlebot, as opposed to those for Googlebot-Mobile.
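
    To make that concrete, here is a minimal, hypothetical robots.txt sketch (the paths are placeholders, not from Google’s announcement). Once the change takes effect, the smartphone crawler obeys the group written for Googlebot, so a rule aimed only at Googlebot-Mobile no longer applies to it:

    # Only the remaining feature phone crawler still identifies as Googlebot-Mobile
    User-agent: Googlebot-Mobile
    Disallow: /feature-phone/

    # The smartphone crawler (like desktop Googlebot) obeys this group instead
    User-agent: Googlebot
    Disallow: /private/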

    The update, He says, affects less than 0.001% of URLs, based on Google’s internal analysis.

    Webmasters can, of course, test their sites with the Fetch as Google feature in Webmaster Tools.

    Last month, Google expanded the Crawl Errors feature in Webmaster Tools to help webmasters identify pages on their sites that show smartphone crawl errors. Earlier in 2013, Google made several ranking changes for sites not configured for smartphone users.

    Google also started indexing in-app content, so smartphone users searching can access deep links in mobile apps from the search results page.

    Image via Google

  • Cutts On How Much Facebook And Twitter Signals Matter In Google Ranking

    Google put out a pretty interesting Webmaster Help video today with Matt Cutts answering a question about a topic a lot of people would like to understand better – how Facebook and Twitter affect Google rankings.

    “Facebook and Twitter pages are treated like any other pages in our web index, and so if something occurs on Twitter or occurs on Facebook, and we’re able to crawl it, then we can return that in our search results,” he says. “But as far as doing special, specific work to sort of say, ‘Oh, you have this many followers on Twitter or this many likes on Facebook,’ to the best of my knowledge, we don’t currently have any signals like that in our web search ranking algorithms.”

    “Now let me talk a little bit about why not,” he continues. “We have to crawl the web in order to find pages on those two web properties, and we’ve had at least one experience where we were blocked from crawling for about a month and a half, and so the idea of doing a lot of special engineering work to try and extract some data from web pages, when we might get blocked from being able to crawl those web pages in the future, is something where the engineers would be a little bit leery about doing that.”

    “It’s also tricky because Google crawls the web, and as we crawl the web, we are sampling the web at finite periods of time. We’re crawling and fetching a particular web page,” he says. “And so if we’re fetching that particular web page, we know what it said at one point in time, but something on that page could change. Someone could change the relationship status or someone could block a follower, and so it would be a little unfortunate if we tried to extract some data from the pages that we crawled, and we later on found out that, for example, a wife had blocked an abusive husband or something like that, and just because we happened to crawl at the exact moment when those two profiles were linked, we started to return pages that we had crawled.”

    Cutts says they worry a lot about identity because they’re “sampling an imperfect web,” and identity is simply hard.

    “And so unless we were able to get some way to solve that, where we had better information, that’s another reason why the engineers would be a little bit wary or a little bit leery of trying to extract data when that data might change, and we wouldn’t know it because we were only crawling the web.”

    Funny, because they don’t seem to be that leery about crawling Wikipedia content, which powers much of Google’s Knowledge Graph, and from time to time leads to erroneous or otherwise unhelpful information being presented as the most appropriate answer to your query. Google has, in the past, presented bad Wikipedia info for hours after it was corrected on Wikipedia itself.

    Cutts goes on to say that he’s not discouraging the use of Twitter and Facebook, and that a lot of people get “a ton of value” from both Facebook and Twitter. He also notes that both are a “fantastic avenue” for driving visitors and traffic to your site, letting people know about news and building up your personal brand. Just don’t assume that Google is able to access any signals from them.

    He also says that over a “multi-year, ten-year kind of span,” it’s clear that people are going to know more about who is writing on the web. Google will be more likely to understand identity and social connections better over that time, he says.

    Image via YouTube

  • Expedia Apparently Penalized By Google For Linking Practices

    It looks like Google has penalized travel site Expedia, one of its competitors and a member of the FairSearch Coalition, which regularly lobbies for antitrust action to be taken against the search giant.

    Data from SearchMetrics (via Search Engine Land) shows a 25% drop in Google search visibility for the site, and it has taken a hit on major keywords like “hotels,” “vacation,” and “airline tickets.”

    Expedia search visibility

    Popular belief is that the site was likely hit with an unnatural link penalty. Last month, some of the site’s link tactics drew some negative attention in the SEO industry, and it looks like they may have caught up with the site.

    “What Expedia creates is the huge amount of paid articles filled with fluff content aaaaand…what else?” wrote NenadSEO. “Overly Optimized Anchor Keywords. Yes, that’s what Expedia is doing. They are using Link Schemes to hit the top spots in the SERPs – and they are getting away with this. Nobody is going to punish them.”

    Or maybe they did.

    Google has been cracking down on “link schemes” more than ever in recent months. I’m sure you recall the big Rap Genius story from a couple of weeks ago. That site didn’t stay in the penalty box very long, and it seems unlikely that Expedia will either, but I guess we’ll see.

    Expedia still ranks for a branded search, which is already better than Rap Genius was doing when it was penalized.

    Earlier I mentioned the FairSearch thing. Interestingly, Expedia has itself been accused of breaking antitrust laws in the past.

    Image via SearchMetrics

  • Matt Cutts Has Declared Guest Blogging For SEO ‘Done’

    Google has been warning webmasters about guest blogging abuse for years now. This week, head of webspam Matt Cutts basically declared guest blogging dead.

    “Stick a fork in it: guest blogging is done,” says Google’s Matt Cutts.

    In light of Cutts’ comments will you be wary of contributing guest content on other sites? Of publishing guest content on your site? Share your thoughts on the subject.

    Cutts took to his personal blog on Monday to share an email he received from a “content marketer” offering a guest blog post in trade for “a dofollow link or two in the article body,” which Cutts calls a “clear violation of Google’s quality guidelines.”

    Obviously they didn’t realize who they were emailing unless it was a joke.

    Cutts says Google has been seeing more and more reports of this type of thing.

    “Ultimately, this is why we can’t have nice things in the SEO space: a trend starts out as authentic. Then more and more people pile on until only the barest trace of legitimate behavior remains,” he writes. “We’ve reached the point in the downward spiral where people are hawking ‘guest post outsourcing’ and writing articles about ‘how to automate guest blogging.’”

    “So stick a fork in it: guest blogging is done; it’s just gotten too spammy,” he adds. “In general I wouldn’t recommend accepting a guest blog post unless you are willing to vouch for someone personally or know them well. Likewise, I wouldn’t recommend relying on guest posting, guest blogging sites, or guest blogging SEO as a linkbuilding strategy.”

    Early comments were a little critical of this stance. One equated it to “throwing the baby out with the bathwater,” in the sense that this takes too broad a view, and would be detrimental to guest bloggers who actually offer legitimate, quality content on legitimate, quality sites.

    “Maybe Google needs to up their game and ability to decipher what is quality or not,” suggests Matt Sells, who made the baby/bathwater analogy. “Everyone should not be punished for the wrongdoings of some.”

    A bit later, Cutts ended up updating his post, toning down the message significantly.

    He said, “I’m not trying to throw the baby out with the bath water. There are still many good reasons to do some guest blogging (exposure, branding, increased reach, community, etc.). Those reasons existed way before Google and they’ll continue into the future. And there are absolutely some fantastic, high-quality guest bloggers out there. I changed the title of this post to make it more clear that I’m talking about guest blogging for search engine optimization (SEO) purposes.”

    The title now stands as “The Decay and fall of guest blogging FOR SEO”.

    “I’m also not talking about multi-author blogs,” he added. “High-quality multi-author blogs like Boing Boing have been around since the beginning of the web, and they can be compelling, wonderful, and useful. I just want to highlight that a bunch of low-quality or spam sites have latched on to ‘guest blogging’ as their link-building strategy, and we see a lot more spammy attempts to do guest blogging. Because of that, I’d recommend skepticism (or at least caution) when someone reaches out and offers you a guest blog article.”

    The message may have been toned down, but will webmaster reaction be? There’s already talk of disavowing links from old guest blog posts. How many will go overboard? How many will seek to have Google ignore perfectly legitimate links they’ve earned by writing high-quality content, for fear that those links will ultimately hurt them in Google, and end up shooting themselves in the foot? You know, like those who have been getting rid of other legitimate links in hopes that it will somehow make Google think more highly of their sites.

    No related Google update was announced or anything, but if such an update were to launch, it would be interesting to follow how well Google could determine what is good vs. what is bad.

    Will you accept any guest posts after this? Let us know in the comments.

    Image via YouTube

  • Sarah Bird Is Now Officially The CEO Of Moz

    A little over a month ago, Rand Fishkin announced that he would step down as CEO of Moz (formerly SEOmoz). He would remain with the company, and focus on product and marketing, while handing over the reins to President and COO Sarah Bird.

    Read our recent interview with Fishkin about the transition here.

    Fishkin told us the transition would come in mid-January, which has now arrived. Bird is now officially CEO.

    The linked post includes a half-hour video about Fishkin’s and Bird’s past work together and plans for the future.

    Image via Moz

  • Matt Cutts Discusses The Google Algorithm

    Google has released a new Webmaster Help video in which head of webspam Matt Cutts discusses the Google algorithm and how it ranks results on a page. More specifically, he responds to the following submitted question:

    Does Google use the same algorithm to rank all the results on page 1, or different algorithms for a wider variety in the results? (Pos. 1-3 = primary focus on freshness; Pos. 4-6 = primary focus on back-links; Pos. 7-10 = primary focus on social signals)

    “I’m only going to concentrate on the web ranking because that’s what I know about, and in general, the rankings are not different for, you know, positions 1-3 and 4-6 and 7-10,” Cutts says. “It’s the same algorithm that returns lots of different web results – you know, 100 or even 1,000, and then with those, we just sort them in order of what we think the trade-off of relevancy vs. reputation. So we want something that’s very relevant, but also as reputable as we can find.”

    “So it’s the same algorithm that generates all of those sorted lists of results, and then that shows up on the first page,” he adds. “So for the most part, for web ranking, it’s not the case that position number 9 is saved for, you know, things based on backlinks or anything like that. It’s the same algorithm that’s generating that list of search results.”

    You may also be interested in this other recent video Cutts did in which he talks about Google’s “How Search Works” site.

    In another video earlier this week, he noted that they had recorded a new batch of videos, so we should be getting plenty more of them in the near future.

  • Google Tweaks Guidance On Link Schemes

    Google has made a subtle, but noteworthy change to its help center article on link schemes, which is part of its quality guidelines dissuading webmasters from engaging in spammy SEO tactics.

    Google put out a video last summer about adding rel=”nofollow” to links that are included in widgets:

    In that, Matt Cutts, Google’s head of webspam, said, “I would not rely on widgets and infographics as your primary way to gather links, and I would recommend putting a nofollow, especially on widgets, because most people when they just copy and paste a segment of code, they don’t realize what all is going on with that, and it’s usually not as much of an editorial choice because they might not see the links that are embedded in that widget.”

    “Depending on the scale of the stuff that you’re doing with infographics, you might consider putting a rel nofollow on infographic links as well,” he continued. “The value of those things might be branding. They might be to drive traffic. They might be to sort of let people know that your site or your service exists, but I wouldn’t expect a link from a widget to necessarily carry the same weight as an editorial link freely given where someone is recommending something and talking about it in a blog post. That sort of thing.”
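
    As a rough illustration (the URL, file name and anchor text below are hypothetical, not taken from Google’s guidance), a widget embed snippet with the kind of nofollow Cutts describes might look like this, so the bundled credit link passes no PageRank:

    <!-- Hypothetical widget embed code; example.com and the paths are placeholders -->
    <script src="https://example.com/weather-widget.js"></script>
    <div class="example-weather-widget">
      <!-- rel="nofollow" keeps this non-editorial credit link from passing PageRank -->
      <a href="https://example.com/" rel="nofollow">Weather widget by Example.com</a>
    </div>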

    In Google’s guidance for link schemes, it gives “common examples of unnatural links that may violate our guidelines.”

    It used to include: “Links embedded in widgets that are distributed across various sites.”

    As Search Engine Land brings to our attention, that part now reads: “Keyword-rich, hidden or low-quality links embedded in widgets that are distributed across various sites.”

    That’s a little more specific, and seems to indicate that the previous guidance cast a broader net over such links than what Google really frowns upon. That’s worth noting.

    You’d do well to pay attention to what Google thinks about link schemes, as the search engine has made a big point of cracking down on them lately (even if some have gotten off lightly).

  • Google Webmaster Tools ‘Search Queries’ Feature Gets Some New Tweaks

    Google has announced a couple of changes to the Search Queries feature in Webmaster Tools, improving stats for mobile sites and getting rid of rounding.

    For webmasters who manage mobile sites on separate URLs from the desktop versions (like m.example.com), Google will now show queries where the m. pages appeared in results for mobile browsers and queries where Google applied Skip Redirect.

    Skip Redirect

    “This means that, while search results displayed the desktop URL, the user was automatically directed to the corresponding m. version of the URL (thus saving the user from latency of a server-side redirect),” explains developer programs tech lead Maile Ohye. “Prior to this Search Queries improvement, Webmaster Tools reported Skip Redirect impressions with the desktop URL. Now we’ve consolidated information when Skip Redirect is triggered, so that impressions, clicks, and CTR are calculated solely with the verified m. site, making your mobile statistics more understandable.”
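
    For context, here is a minimal, hypothetical sketch of the kind of annotations commonly used with separate m. URLs (example.com is a placeholder; these tags are not part of the Webmaster Tools change itself): the desktop page points to its mobile counterpart, and the mobile page points back with a canonical link, which is what lets Google tie the two versions together.

    <!-- On the desktop page, e.g. http://www.example.com/page -->
    <link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page">

    <!-- On the mobile page, e.g. http://m.example.com/page -->
    <link rel="canonical" href="http://www.example.com/page">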

    The change enabling users to see search queries data without being rounded will become visible in Webmaster Tools over the next few days.

    “We hope this makes it easier for you to see the finer details of how users are finding your website, and when they’re clicking through,” says Google webmaster trends analyst John Mueller.

    We wonder if these tweaks are related to Google’s recent call for ideas from users for Webmaster Tools improvements.

    Image: Google

  • Google Accused Of ‘Double Standard’ Over Rap Genius Penalty Lift

    Google is being accused of employing a double standard for its handling of web spam and enabling one penalized site to recover speedily from a manual action. While the search giant has certainly been accused of double standards by angry webmasters many times over the years, the recent case of Rap Genius is generating a lot of discussion, and a bit of fury in the SEO community.

    Rap Genius was able to get out of the Google penalty box just ten days after being caught in a very public and very obvious link scheme (the kind of thing Google has been cracking down on vigorously throughout the past year or two). Some think Google has given the site unfair treatment in lifting the penalty so quickly.

    What do you think? Should Rap Genius be back in Google’s rankings so quickly? Share your thoughts in the comments.

    Rap Genius, if you’re unfamiliar with the site, is basically a lyrics site, but adds interpretation from users. It fancies itself a “hip-hop Wikipedia”. Users can listen to songs, read lyrics and click on pop-up annotations on lines of interest.

    Around Christmastime, Google took notice of a practice the site had been engaging in, after John Marbach blogged about it. Rap Genius took to its Facebook page (which has over 530,000 likes, by the way) to give its followers the following message:

    Do you have a blog? Do you wanna be a RAP Genius BLOG AFFILIATE? Help us help you! If you have a blog and are down, email me…

    When Marbach responded out of curiosity, asking for more details, he was greeted with this (image credit: John Marbach):

    Rap Genius email

    So yeah, a pretty blatant link scheme. Something Google isn’t shy about slapping sites over. Except in Rap Genius’ case, it only took ten days to get back into the rankings. This is pretty much unheard of. It’s left many in the industry dumbstruck.

    “Would any other webmaster with such a penalty be able to get back so soon? Doubtful,” says Barry Schwartz of Search Engine Roundtable and Search Engine Land.

    SEOBook’s Aaron Wall wrote a scathing post about the situation, noting that the tactic employed by Rap Genius was basically so spammy that most spammers won’t even attempt it:

    Remember reading dozens (hundreds?) of blog posts last year about how guest posts are spam & Google should kill them? Well these posts from RapGenius were like a guest post on steroids. The post “buyer” didn’t have to pay a single cent for the content, didn’t care at all about relevancy, AND a sitemap full of keyword rich deep linking spam was included in EACH AND EVERY post.

    Most “spammers” would never attempt such a campaign because they would view it as being far too spammy. They would have a zero percent chance of recovery as Google effectively deletes their site from the web.

    And while RG is quick to distance itself from scraper sites, for almost the entirety of their history virtually none of the lyrics posted on their site were even licensed.

    Rap Genius was recently targeted by the National Music Publishers Association for the unlicensed publication of lyrics.

    Robert Ramirez, a senior SEO analyst at big name firm Bruce Clay, Inc., wrote on Google+, “I work with small to mid-sized businesses that suffer from manual penalties all the time, and I have NEVER seen such a fast recovery. What made Rap Genius special? The $15 million in VC funding they recently raised? Their high profile?”

    He added, “Other businesses who find themselves penalized are DEVASTATED by these actions, they very often had no idea what they were doing to themselves when they paid an ‘SEO’ to build them links, but the loss of revenue, loss of jobs, loss of livelihood is VERY REAL, no less real or urgent than Rap Genius’ situation.”

    “I am working with two clients right now that are under manual penalties,” he wrote. “Their reconsideration requests were submitted through Google Webmaster Tools in mid-December, and to date there is no answer. I’ve seen reconsideration requests take longer than a month to be responded to. These are real businesses with quality websites that serve a real purpose and offer visitors a quality experience, their only mistake, they paid someone to build them links in the past because they were lead to believe that’s how ‘SEO’ was done.”

    Dan Rippon commented, “Obviously Google wanted to make a spectacle of the issue to highlight their anti-spam cause, but considering the plight of ‘Mom & Pop’ businesses faced with similar problems the level of double standards is pretty impressive.”

    Richard Hearne at Red Cardinal writes what many are probably thinking: “What’s especially disturbing about this is that sites which used directory and article links are still punished by Penguin 2 years later, but these guys who used the spammiest of tactics get off after 10 days.”

    Even music tech blog Hypebot is critical of the whole thing, saying, “This game really is fixed.”

    Some think that perhaps Google just made this move because Rap Genius actually has better content, and it wants to serve the best search results. Josh Constine at TechCrunch, for example, says Google is “favoring smart results over holding a grudge,” and that “Google apparently cares more about giving the best search results than punishing spammers.”

    Rap Genius put out a blog post detailing the steps it took to get back into Google’s good graces, which included contacting webmasters about link removals, and writing a scraper to gather URLs to have disavowed. They also used the post to promote an upcoming iOS app, which brings up another interesting point.

    As Wall noted in his post, Rap Genius has managed to get all kinds of promotion out of this whole ordeal in addition to getting its Google rankings back. In addition to all the mentions of its name, it’s getting a lot of high-value links simply from being covered in the media. In other words, it’s entirely possible that Rap Genius will come out of this whole thing in better shape than ever.

    One thing Rap Genius does have going for it is verified accounts from many of the biggest names in rap (not to mention people like Sheryl Sandberg). That means trust and authority, and these are things that Google cares about a lot.

    Google itself hasn’t had a whole lot to say about the story.

    What do you think? Is Google unfairly helping Rap Genius or does the site deserve the rankings it’s getting? Let us know in the comments.

    Image: Rap Genius

  • Cutts On What You Might Not Have Noticed About Google’s ‘How Search Works’ Site

    Google has put out a new Webmaster Help video featuring Matt Cutts talking about some things “you might not have noticed” about the company’s How Search Works site.

    Google launched this site a little less than a year ago. It’s part of the company’s Inside Search site, and includes visuals about search, a view into major search algorithms and features, a big document about guidelines for search raters, a slideshow about spam removal, graphs about spam and policies that explain when Google will remove content.

    One thing he mentions is the part of the site that shows how many searches Google has handled in the time you’ve spent on the page. He then discusses videos on the site that explain how Google rates search quality.

    He talks about different elements of the site for about ten minutes, so if you haven’t spent much time perusing it, you may want to listen to what he has to say. You might find something that captures your interest.

    Believe it or not, he likes the spam section the most.

  • Rap Genius Climbs Out Of Google Penalty Box

    As you may have heard, lyrics site Rap Genius was hit with a big Google penalty around Christmas time for unnatural linking practices.

    The site was telling people they could be a “blog affiliate” and get “MASSIVE traffic” when they posted Rap Genius links to Justin Bieber lyrics on their sites in exchange for tweets to the posts.

    Obviously, you can’t do that. Not for PageRank passing links anyway, if you want to remain in Google’s good graces.

    Google caught wind, and took action.

    Rap Genius remained out of Google rankings for about ten days, but it’s now back. Apparently Google views the site in a positive enough manner (in terms of quality) that the penalty has been lifted, with Rap Genius having cleaned up its act.

    “First of all, we owe a big thanks to Google for being fair and transparent and allowing us back onto their results pages,” the Rap Genius founders said in a blog post. “We overstepped, and we deserved to get smacked.”

    Rap Genius offers a history of its link-building practices here.

    Image: Rap Genius

  • Rand Fishkin On The Best And Worst Parts Of Being Moz CEO

    Last month, we learned that Moz (formerly SEOmoz) CEO Rand Fishkin is stepping down from the role. He revealed that he would be handing the reins over to President and COO Sarah Bird, while taking on less of a people management role, and instead focusing more on his product and marketing passions with the company.

    We reached out to Fishkin to learn more about his decision and the pending transition. He told us about what he liked and disliked about being a CEO, as well as his regrets about holding the position.

    On what he enjoyed most, Fishkin told WebProNews, “The ability to create and influence the company culture, product, team, and mission have certainly been the best parts. I’m hopeful that the ‘influence’ parts will continue for a long time to come in this new role.”

    On what he enjoyed the least, he said, “Over time, it’s been a lot of the organizational development, conflict resolution, and people management issues. Those seem, to me, to be less about how to make a great product, market it, improve it, and deliver value to customer and more about politics, which I wish didn’t exist. The bigger a company gets, the harder all that stuff is, and the better you have to be at it in order to have success doing all the customer-value-add stuff.”

    “I also don’t really enjoy interacting with financial folks outside of Moz,” he added.

    Fishkin had plenty of nice things to say about Bird in his announcement and in an email he sent to Moz staff. He told WebProNews, “Sarah is far more capable of possessing and projecting optimism to the team, more emotionally and culturally well-suited to the people challenges at scale, and she’s not as easily overwhelmed by non-productive emotions as I am (which is something we definitely need).”

    When asked if he has any regrets about being CEO, Fishkin told us, “Absolutely. I think I’ve made numerous terrible decisions as CEO.”

    “That said,” he added. “It’s also been a remarkable run for the company – we’ve built something really amazing culturally, product-wise, and with the Moz brand, and I’m hopeful that long term, we’ll achieve the mission we’ve set for ourselves and help hundreds of thousands of SEO-focused marketers to do their job better.”

    After sharing his plans, Fishkin wrote a blog post titled “Can’t Sleep; Caught in the Loop,” in which he talked about his worst weeks of 2013, weeks in which he experienced what he described as a “weird mental cycle” that has kept him awake. He calls it “the loop.”

    “Moz’s performance this year (which wasn’t great, but was still fairly good, ~25% growth) isn’t directly connected to Sarah taking the leadership role, but it does have an indirect impact,” Fishkin told us. “I think the people challenges at our scale, combined with some of the tough decisions that didn’t pan out created a lot of cycling negativity in my head that I’ve referred to as ‘The Loop.’ That negativity and the emotional impact it’s had on me, and by extension, Moz, are certainly part of the reason I wanted to make this move.”

    “That said, there are others, too,” he added. “I think Sarah will make an excellent CEO long term, and I want to focus more on individual contributor types of work. I also want to put my energy into things I love (like product & marketing) rather than those I don’t, but felt obligated to do (like people issues).”

    Fishkin and Bird recently spoke with the Moz board, and determined that the move will be made in mid-January, when they’ll be moving to a new office.

    Image: Rand Fishkin

  • Google Has A Lot To Say About Duplicate Content These Days

    Duplicate content has been an issue in search engine optimization for many years now, yet there is still a lot of confusion around what you can and can’t do with it, in terms of staying on Google’s good side.

    In fact, even in 2013, Google’s head of webspam Matt Cutts has had to discuss the issue in several of his regular Webmaster Help videos because people keep asking questions and looking for clarification.

    Do you believe your site has been negatively impacted by duplicate content issues in the past? If so, what were the circumstances? Let us know in the comments.

    Back in the summer, Cutts talked about duplicate content with regards to disclaimers and Terms and Conditions pages.

    “The answer is, I wouldn’t stress about this unless the content that you have duplicated is spammy or keyword stuffing or something like that – you know, then an algorithm or a person might take action on it – but if it’s legal boilerplate that’s sort of required to be there, we might, at most, not want to count that, but it’s probably not going to cause you a big issue,” Cutts said at the time.

    “We do understand that lots of different places across the web do need to have various disclaimers, legal information, terms and conditions, that sort of stuff, and so it’s the sort of thing where if we were to not rank that stuff well, then that would probably hurt our overall search quality, so I wouldn’t stress about it,” he said.

    The subject of duplicate content came up again in September, when Cutts took on a question about e-commerce sites that sell products with “ingredients lists” exactly like other sites selling the same product.

    Cutts said, “Let’s consider an ingredients list, which is like food, and you’re listing the ingredients in that food and ingredients like, okay, it’s a product that a lot of affiliates have an affiliate feed for, and you’re just going to display that. If you’re listing something that’s vital, so you’ve got ingredients in food or something like that – not specifications that are 18 pages long, but short specifications – that probably wouldn’t get you into too much of an issue. However, if you just have an affiliate feed, and you have the exact same paragraph or two or three of text that everybody else on the web has, that probably would be more problematic.”

    “So what’s the difference between them?” he continued. “Well, hopefully an ingredients list, as you’re describing it as far as the number of components or something probably relatively small – hopefully you’ve got a different page from all the other affiliates in the world, and hopefully you have some original content – something that distinguishes you from the fly-by-night sites that just say, ‘Okay, here’s a product. I got the feed and I’m gonna put these two paragraphs of text that everybody else has.’ If that’s the only value add you have then you should ask yourself, ‘Why should my site rank higher than all these hundreds of other sites when they have the exact same content as well?’”

    He went on to note that if the majority of your content is the same content that appears everywhere else, and there’s nothing else to say, that’s probably something you should avoid.

    It all comes down to whether or not there’s added value, which is something Google has pretty much always stood by, and is reaffirmed in a newer video.

    Cutts took on the subject once again this week. This time, it was in response to this question:

    How does Google handle duplicate content and what negative effects can it have on rankings from an SEO perspective?

    “It’s important to realize that if you look at content on the web, something like 25 or 30 percent of all of the web’s content is duplicate content,” said Cutts. “There are man pages for Linux, you know, all those sorts of things. So duplicate content does happen. People will quote a paragraph of a blog, and then link to the blog. That sort of thing. So it’s not the case that every single time there’s duplicate content, it’s spam. If we made that assumption, the changes that happened as a result would end up, probably, hurting our search quality rather than helping our search quality.”

    “So the fact is, Google looks for duplicate content and where we can find it, we often try to group it all together and treat it as if it’s one piece of content,” he continued. “So most of the time, suppose we’re starting to return a set of search results, and we’ve got two pages that are actually kind of identical. Typically we would say, ‘Okay, you know what? Rather than show both of those pages (since they’re duplicates), let’s just show one of those pages, and we’ll crowd the other result out.’ And if you get to the bottom of the search results, and you really want to do an exhaustive search, you can change the filtering so that you can say, okay, I want to see every single page, and then you’d see that other page.”

    “But for the most part, duplicate content is not really treated as spam,” he said. “It’s just treated as something that we need to cluster appropriately. We need to make sure that it ranks correctly, but duplicate content does happen. Now, that said, it’s certainly the case that if you do nothing but duplicate content, and you’re doing it in an abusive, deceptive, malicious or manipulative way, we do reserve the right to take action on spam.”

    He mentions that someone on Twitter was asking how to do an RSS autoblog to a blog site, and not have that be viewed as spam.

    “The problem is that if you are automatically generating stuff that’s coming from nothing but an RSS feed, you’re not adding a lot of value,” said Cutts. “So that duplicate content might be a little more likely to be viewed as spam. But if you’re just making a regular website, and you’re worried about whether you have something on the .com and the .co.uk, or you might have two versions of your Terms and Conditions – an older version and a newer version – or something like that. That sort of duplicate content happens all the time on the web, and I really wouldn’t get stressed out about the notion that you might have a little bit of duplicate content. As long as you’re not trying to massively copy for every city and every state in the entire United States, show the same boiler plate text….for the most part, you should be in very good shape, and not really have to worry about it.”

    In case you’re wondering, quoting is not considered duplicate content in Google’s eyes. Cutts spoke on that late last year. As long as you’re just quoting, using an excerpt from something, and linking to the original source in a fair use kind of way, you should be fine. Doing this with entire articles (which happens all the time) is of course a different story.

    Google, as you know, designs its algorithms to abide by its quality guidelines, and duplicate content is part of that, so this is something you’re always going to have to consider. It says right in the guidelines, “Don’t create multiple pages, subdomains, or domains with substantially duplicate content.”

    They do, however, offer steps you can take to address any duplicate content issues that you do have. These include using 301s, being consistent, using top-level domains, syndicating “carefully,” using Webmaster Tools to tell Google how you prefer your site to be indexed, minimizing boilerplate repetition, avoiding publishing stubs (empty pages, placeholders), understanding your content management system and minimizing similar content.

    Google advises against blocking its crawler from accessing duplicate content, though, so think about that too. If Google can’t crawl the duplicate pages, it won’t be able to detect that the URLs point to the same content, and will have to treat them as separate pages. Use the canonical link element instead.
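
    As a quick, hypothetical sketch (example.com and the URLs are placeholders, not from Google’s documentation), the canonical link element goes in the head of each duplicate URL and points at the version you want indexed:

    <!-- On a duplicate URL such as http://www.example.com/product?sessionid=123 -->
    <link rel="canonical" href="http://www.example.com/product">

    The duplicate stays crawlable, which is the point: Google can see for itself that both URLs carry the same content and consolidate them.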

    Have you been affected by how Google handles duplicate content in any way? Please share.

  • Google Says It’s Now Working To ‘Promote Good Guys’

    Google’s Matt Cutts says Google is “now doing work on how to promote good guys.”

    More specifically, Google is working on changes to its algorithm that will make it better at promoting content from people who it considers authoritative on certain subjects.

    You may recall earlier this year when Cutts put out the following video talking about things Google would be working on this year.

    In that, he said, “We have also been working on a lot of ways to help regular webmasters. We’re doing a better job of detecting when someone is more of an authority on a specific space. You know, it could be medical. It could be travel. Whatever. And try to make sure that those rank a little more highly if you’re some sort of authority, or a site that, according to the algorithms, we think might be a little more appropriate for users.”

    Apparently that’s something Google is working on right now.

    Cutts appeared in a “This Week In Google” video (via Search Engine Land/Transcript via Craig Moore) in which he said:

    We have been working on a lot of different stuff. We are actually now doing work on how to promote good guys. So if you are an authority in a space, if you search for podcasts, you want to return something like Twit.tv. So we are trying to figure out who are the authorities in the individual little topic areas and then how do we make sure those sites show up, for medical, or shopping or travel or any one of thousands of other topics. That is to be done algorithmically not by humans … So PageRank is sort of this global importance. The New York Times is important so if they link to you then you must also be important. But you can start to drill down in individual topic areas and say okay if Jeff Jarvis (Prof of journalism) links to me he is an expert in journalism and so therefore I might be a little bit more relevant in the journalistic field. We’re trying to measure those kinds of topics. Because you know you really want to listen to the experts in each area if you can.

    For quite a while now, authorship has given Google an important signal about individuals as they relate to the content they’re putting out. Interestingly, Google is scaling authorship back a bit.

    Image: YouTube

  • Google: Your Various ccTLDs Will Probably Be Fine From The Same IP Address

    Ever wondered if Google would mind if you had multiple ccTLD sites hosted from a single IP address? If you’re afraid they might not take kindly to that, you’re in for some good news. It’s not really that big a deal.

    Google’s Matt Cutts may have just saved you some time and money with this one. He takes on the following submitted question in the latest Webmaster Help video:

    For one customer we have about a dozen individual websites for different countries and languages, with different TLDs under one IP number. Is this okay for Google or do you prefer one IP number per country TLD?

    “In an ideal world, it would be wonderful if you could have, for every different .co.uk, .com, .fr, .de, if you could have a different, separate IP address for each one of those, and have them each placed in the UK, or France, or Germany, or something like that,” says Cutts. “But in general, the main thing is, as long as you have different country code top level domains, we are able to distinguish between them. So it’s definitely not the end of the world if you need to put them all on one IP address. We do take the top-level domain as a very strong indicator.”

    “So if it’s something where it’s a lot of money or it’s a lot of hassle to set that sort of thing up, I wouldn’t worry about it that much,” he adds. “Instead, I’d just go ahead and say, ‘You know what? I’m gonna go ahead and have all of these domains on one IP address, and just let the top-level domain give the hint about what country it’s in.’ I think it should work pretty well either way.”
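
    As a rough sketch of what that single-IP setup can look like in practice (a hypothetical nginx configuration, not anything from the video; the domains and paths are placeholders), name-based virtual hosts let one server on one IP address answer for several ccTLDs:

    # One server, one IP address, several country-code domains
    server {
        listen 80;
        server_name example.co.uk;
        root /var/www/example-uk;
    }

    server {
        listen 80;
        server_name example.fr;
        root /var/www/example-fr;
    }

    server {
        listen 80;
        server_name example.de;
        root /var/www/example-de;
    }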

    While on the subject, you might want to listen to what Cutts had to say about location and ccTLDs earlier this year in another video.