WebProNews

Tag: SEO

  • Matt Cutts On How To Get Google To Recognize Your Mobile Pages

    Google has a new “Webmaster Help” video out. This time Matt Cutts discusses optimizing for the mobile web. Specifically, he takes on this submitted question:

    Is there a way to tell Google there is a mobile version of a page, so it can show the alternate page in mobile search results? Or similarly, that a page is responsive and the same URL works on mobile?

    Cutts says this is a very popular question. Google has plenty of information on the subject out there already, but obviously people still aren’t quite grasping it.

    “Don’t block javascript and CSS. That actually makes a pretty big difference,” he says. “If we’re able to fetch the javascript and CSS we can basically try to figure out whether it’s responsive design on our side. So my advice – and I’m going to keep hitting this over and over and over again – is never block javascript and CSS. Go ahead and let Googlebot fetch those, interpret those, figure out whether a site is responsive, and do all sorts of other ways like executing or rendering javascript to find new links, and being able to crawl your website better.”
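
    If you want to check this on your own site, here’s a rough sketch (Python standard library only) that asks whether Googlebot is allowed to fetch a few JavaScript and CSS URLs according to your robots.txt. The domain and asset paths are made-up placeholders.

    ```python
    # Minimal sketch: check whether robots.txt blocks Googlebot from JS/CSS.
    # The domain and asset paths below are hypothetical examples.
    from urllib.robotparser import RobotFileParser

    SITE = "https://www.example.com"
    ASSETS = ["/assets/app.js", "/assets/site.css"]

    parser = RobotFileParser()
    parser.set_url(SITE + "/robots.txt")
    parser.read()  # fetches and parses the live robots.txt

    for path in ASSETS:
        allowed = parser.can_fetch("Googlebot", SITE + path)
        print(path, "OK" if allowed else "BLOCKED (Cutts advises against this)")
    ```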

    “The other way to do it is to have one version of the page for desktop and another version of the page for regular mobile smartphone users, and to have separate URLs,” he continues. “So how do you handle that case correctly? Well, first off, you want to make sure that on your desktop page, you do a rel-alternate that points to the smartphone version of your page. That basically lets Google know, ‘Yes, these two versions of the same page are related to each other because this is the smartphone version, and this is the desktop version.’ Likewise, on the smartphone version, you want to do a rel=canonical to the desktop version. What that does is it tells Googlebot, ‘Hey, even though this is its own separate URL – while the content is the same – it should really be glommed together with the desktop version.’ And so as long as you have those bi-directional links (a rel-alternate pointing from the desktop to the smartphone and a rel=canonical pointing from the smartphone to the desktop) then Google will be able to suss out the difference between those, and be able to return the correct version to the correct user.”
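
    As a rough illustration of those bidirectional annotations, the sketch below fetches a desktop URL and its separate smartphone URL, then checks that the desktop page carries a rel=alternate pointing at the mobile page and that the mobile page carries a rel=canonical pointing back. The URLs are invented and the string matching is deliberately naive (real pages may order attributes differently), so treat it as a sanity check, not a validator.

    ```python
    # Rough check of the desktop <-> smartphone annotations described above.
    # URLs are hypothetical; matching is a naive substring check, not a parser.
    import urllib.request

    DESKTOP_URL = "https://www.example.com/page"
    MOBILE_URL = "https://m.example.com/page"

    def fetch(url):
        """Download a page and return its HTML as lowercase text."""
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace").lower()

    desktop_html = fetch(DESKTOP_URL)
    mobile_html = fetch(MOBILE_URL)

    # Desktop page should declare the smartphone version as an alternate.
    has_alternate = 'rel="alternate"' in desktop_html and MOBILE_URL in desktop_html

    # Smartphone page should point its canonical back at the desktop version.
    has_canonical = 'rel="canonical"' in mobile_html and DESKTOP_URL in mobile_html

    print("desktop rel=alternate -> mobile:", has_alternate)
    print("mobile rel=canonical -> desktop:", has_canonical)
    ```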

    “Now there’s one other thing to bear in mind, which is that you can also just make sure that you redirect any smartphone agents from the desktop version to the smartphone version,” Cutts adds. “So we look for that. If we crawl with Googlebot Mobile, and we see that we get redirected to a separate URL then we start to interpret, and say, ‘Ah, it looks like most people are doing that – where they have a desktop version of the page, a smartphone user agent comes in, and they get redirected to a different URL.’ Of course, the thing to bear in mind is just like earlier – we said not to block javascript and CSS – one common mistake that we see is blocking Googlebot Mobile or blocking Googlebot whenever it tries to fetch the smartphone version of the page. Don’t block Googlebot in either case. Just make sure that you return the correct things, and treat Googlebot Mobile like you would treat a smartphone user agent, and treat Googlebot (regular) just like you would treat a desktop user.”
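
    For the redirect approach, a server-side sketch might look something like the following. Flask, the m.example.com URL and the crude user-agent check are all assumptions made for illustration; real sites would use proper device detection.

    ```python
    # Illustrative sketch of redirecting smartphone user agents to a separate
    # mobile URL. Flask and the m.example.com pattern are assumptions for the
    # example, not anything Google prescribes.
    from flask import Flask, redirect, request

    app = Flask(__name__)

    # Very rough smartphone detection; "mobile" also matches smartphone Googlebot.
    MOBILE_HINTS = ("iphone", "android", "mobile")

    def looks_mobile(user_agent: str) -> bool:
        ua = (user_agent or "").lower()
        return any(hint in ua for hint in MOBILE_HINTS)

    @app.route("/page")
    def desktop_page():
        # Smartphone agents (including Google's smartphone crawler) get the
        # mobile URL; regular Googlebot and desktop browsers get the desktop page.
        if looks_mobile(request.headers.get("User-Agent", "")):
            return redirect("https://m.example.com/page", code=302)
        return "<html><body>Desktop version of the page</body></html>"
    ```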

    As long as you follow these best practices, he says, Google will figure it out.

    The video page points to this page on building smartphone-optimized websites, which includes an overview of Google’s recommendations, common mistakes, and more info about various elements of the subject.

    Image via YouTube

  • Google Says Knowledge Graph Is The ‘Swiss Army Knife’ To Your Site’s Corkscrew

    With the Knowledge Graph, Google is trying to be the “Swiss Army Knife” to publishers’ corkscrews. That is according to Google SVP and software engineer Amit Singhal, who spoke at SMX West earlier this week.

    Do you think this is a good analogy? Has your traffic suffered from Google putting its own content on search results pages? Let us know in the comments.

    Since Google launched the Knowledge Graph, and more so as it has continued building it to encompass more types of queries, publishers have wondered what it means for the future of getting traffic from Google. After all, if Google is giving users what they’re looking for right on the search results page, why would they need to click over to your site?

    In addition to the Knowledge Graph, in some cases, Google is even going so far as to put sponsored results for its own products above sites that people are specifically searching for. We’re talking branded searches in which Google forces its own product above the actual brand being searched for.

    Google’s Matt Cutts recently announced a new tool for users to report scrapers who are outranking original source content. In response, Dan Barker tweeted this back at him, illustrating how Google answers a question right on the results page so the user doesn’t need to click over to another site (in this case, Wikipedia).

    That got nearly 35,000 retweets and 4,000 favorites, which is quite a lot for an SEO-related tweet. Clearly this resonated with people.

    Search Engine Land, sister site to SMX, has a liveblog of Singhal’s keynote (which was an on-stage interview with Danny Sullivan). Sullivan brought up the tweet, and asked him about the Knowledge Graph and its effect on publishers.

    The liveblogged account of Singhal’s words (some of which was paraphrased) says:

    If you look at a search engine, the best analogy is that it’s an amazing Swiss Army Knife. It’s great, but sometimes you need to open a wine bottle. Some genius added that to the knife. That’s awesome. That’s how we think of the Knowledge Graph. Sometimes you only need an answer.

    The world has gone mobile. In a mobile world, there are times when you cannot read 20 pages, but you need something — an extra tool on your Swiss Army Knife. When you build a better tool, you use it more.

    Note: According to Barry Schwartz, Singhal specifically referred to publishers as “corkscrews” and “screwdrivers”.

    “Personally, I kept finding it funny Amit using the ‘screw’ drive[r] and cork ‘screw’ association to publishers,” Schwartz blogged. “Yea, publishers do feel ‘screwed’ and him using those words didn’t help. But his analogy, while it stinks, is true.”

    Back to Search Engine Land’s liveblogged account. On getting the balance right in terms of using others’ content…

    It’s a great question and we think about it all the time. We built Google to fulfill user’s needs. Somewhere along the way, people started debating if web traffic is more than users. But keep in mind that we need to keep our user’s trust. We’re part of an open web system. If we lose our user’s trust, the open web would lose its strongest ally (sorry readers, I’m paraphrasing here). If people stop trusting us, then a sinking tide sinks us all.

    We deeply care about this. I’ve been in this field for 20 years. The relationship between publishers, Google and users is all one of mutual benefit. We work hard at getting that balance right. You guys (the audience) have been great contributors to the web. The world is changing, and SEO is all about change. Users dictate how the world changes. We are changing so that our users get a lovely product, and publishers get access to our users. (paraphrase again)

    I don’t know if any of this is going to make publishers feel better about the direction Google is heading in, but it’s pretty consistent with the things Google has said in the past.

    Another problem with the Knowledge Graph, one which apparently wasn’t discussed in the keynote, is that it often shows erroneous information, sometimes for businesses.

    The Knowledge Graph is definitely useful to searchers looking for quick answers, but how much users can really rely on it for accuracy is debatable.

    What do you think? Let us know in the comments.

    Image via Wikimedia Commons

  • Google Thinks You Don’t Want To Know About Its Algorithm Changes

    Google, for a little while, used to be more transparent about the changes it made to its algorithm. Then it became much less transparent, and is now saying that people were simply too “bored” to want to know about such changes.

    Do you believe that’s really why Google has become less transparent about changes? Were you bored of hearing about what Google was doing to its algorithm? Let us know in the comments.

    In December of 2011, Google announced what it described as a “monthly series on algorithm changes” on its Inside Search blog. Google started posting monthly lists of what it referred to as “search quality highlights”. These provided perhaps the most transparency into how Google changes its algorithm that the company has ever provided.

    The lists weren’t exactly a complete look at Google’s secret ranking sauce, but they did give those interested plenty of insight into the kinds of changes Google was making from month to month. Some were big, and some were small.

    If nothing else, they gave you a general sense of the kinds of areas Google was looking at during a particular time period. For example, there was a period when many of the specific changes Google was making were directly related to how it handles synonyms.

    Google described the lists as an attempt to “push the envelope when it comes to transparency.” They started off delivering the lists once a month as promised. Eventually, they started coming out much more slowly. For a while, they came out every other month, with multiple lists at a time. Then, they just stopped coming.

    It’s been roughly a year and a half since Google released one of these “transparency” lists. The last one was on October 4th of 2012.

    Google never bothered to explain why it stopped putting out the lists, though I reached out for comment on the matter multiple times. That is, until now.

    AJ Kohn mentioned on Twitter (hat tip to Search Engine Roundtable) that “a Google test change log would save countless inane conversations and blog posts.”

    To that, Cutts responded, “Except we did just that for a year, blogging all the changes we released. Eventually the world got bored.”

    Did it?

    Kohn and Barry Schwartz – two of the more well-known search bloggers – both said they were not bored. Matt Dimock said it’s not true, and that he found the updates “very useful”. Others chimed in to express similar sentiments.

    I know I was so bored with it that I blogged about every single list (usually with multiple articles on different changes), and multiple times about how they stopped putting the lists out, only to be completely ignored when I asked about it (and Google typically responds to my requests for comment, though they still haven’t answered for the screwed up YouTube embed code yet either. It’s still screwed up, by the way.).

    Moz’s Keri Morgret asked Cutts if he would blog it again if Moz promised to have Rand Fishkin retweet every post. Apparently Moz, one of the most respected entities in search, wasn’t bored either.

    Cutts gave no indication that the lists would be back, though there is clearly interest in them. It’s nice that someone at Google at least finally acknowledged the lists at all.

    Either way, apparently everyone finds transparency boring. Right.

    Do you think the “world was bored” with knowing about changes Google made to its algorithm? Would you like to see Google bring back the monthly lists? Share your thoughts in the comments.

    Image via YouTube

  • Google Expands Link Network Crackdown Across Europe

    Google has been cracking down on link networks like never before in recent months. That continues with a couple more announcements from head of webspam Matt Cutts.

    In January, Cutts said Google was taking action on French link network Buzzea. He also noted that Germany was on the list of places Google was looking at. Then, in February, Google put out its German blog post warning about paid links.

    Later in February, Cutts told Twitter followers that Google was not done with Germany, and also that it had just taken action on two Polish link networks.

    On Monday, Cutts tweeted about the latest areas of focus:

    Cutts has not always been shy about calling out specific link networks, though this time (and others) he did not name names. At least not yet.

    Image via YouTube

  • Matt Cutts Was ‘Trying To Decide How Sassy To Be When Answering This Question’

    Google has put out a new “Webmaster Help” video with the title “Can sites do well without using spammy techniques?”

    That’s a bit rephrased from the actual question that prompted the response:

    Matt, Does (sic) the good guys still stand a chance? We’re a small company that hired an SEO firm that we thought was legit, but destroyed our rankings w/ spam backlinks. We’ve tried everything but nothing helps. What can a company with good intentions do?

    Cutts begins, “We were trying to decide how sassy to be when answering this question, because essentially you’re saying, ‘Do the good guys stand a chance? We spammed, and we got caught, and so we’re not ranking,’ and if you take a step back – if you were to look at that from someone else’s perspective, you might consider yourself a good guy, but you spammed, so the other people might consider you a bad guy, and so the fact that you got caught meant, hey, other good guys who didn’t spam can rank higher. So from their perspective things would be going well. So you’re kind of tying those two together (Do the good guys stand a chance? and We spammed.)”

    Well, technically Matt’s right. They spammed, but it sounds like this person got screwed over by the firm they hired and paid the price for it. That might be their own fault (if that’s even really what happened), but that does make the story a little more complex than if they did their own spamming and acted like they were still “the good guys”. After all, this has even happened to Google before.

    Cutts continues, “So I think the good guys do stand a chance, and we try hard to make sure that the good guys stand a chance by giving them good information, trying to make sure that they can get good information in Webmaster Tools, resources, all sorts of free information and things they can do, and lots of advice. But the good guys stand a chance if they don’t spam, right (laughs)? My advice is you might have to go through a difficult process of reconsideration requests and disavowing links, and whatever it is you need to do (getting links off the web) to clean things up.”

    “I absolutely believe good guys can stand a chance, and small sites and small businesses can stand a chance,” he says. “But I think (this is May 2013) that by the time you get to the end of the summer, I think more and more people will be saying, ‘Okay you stand a chance, but don’t start out going to the black hat forums, trying to spam, trying to do high jinks and tricks because that sort of technique is going to be less likely to work going forward.’”

    He notes that it can be harder and take longer to get good links, but that those links will likely stand the test of time.

    “Good luck,” he tells the person who submitted the question. “I hope you are able to get out of your existing predicament, and in the future please tell people before you sign up with an SEO, ask for references, do some research, ask them to explain exactly what they’re going to do. If they tell you they know me, and they have a secret in with the webspam team, you should scream and run away immediately.”

    I wonder how often that works.

    “If they will tell you what they’re doing in clear terms, and it makes sense, and it doesn’t make you feel a little uneasy then that’s a much better sign,” he says.

    Image via YouTube

  • Moz Partners With Bitly For Click Tracking And Link Data

    Bitly announced today that it will provide click tracking technology and inbound link data to Moz to help users better understand who is linking to websites and how relevant those links are.

    The data will use the number and frequency of clicks to determine relevancy.

    Moz had been using Twitter link data to rank relevance, but Bitly’s data will utilize Twitter as well as Facebook, Google+, blogs, and other sources.

    “The Bitly click dataset is hands down the broadest and most authoritative available to anyone looking for information on how their content and brand is performing across the web,” said Moz co-founder and former CEO Rand Fishkin. “Marketers armed with these insights are able to build campaigns that are designed to optimize attention through content.”

    “Previously we were using just Twitter data to understand the relevance of shared content,” he added. “While that’s a great start, our clients are looking for a holistic view. Bitly’s click data gives us a much more comprehensive and accurate picture by looking at the entire web and drilling into actual clicks, which is more valuable than simply looking at how frequently content is shared.”

    Bitly CEO Mark Josephson said, “Bitly owns a unique view of how links are shared across the internet. Insights gleaned from our differentiated data set can help all marketers make better decisions. We’re excited to put this into action with Moz so their clients can better understand how content and links are shared across the Web.”

    According to the company, marketers can identify recently created URLs and links within seconds, and highlight the most clicked content for effective campaign management.

    Image via Moz

  • Now There’s A Matt Cutts Whack-A-Mole Game

    So remember that Matt Cutts Donkey Kong game from a few weeks ago? That has inspired a new Matt Cutts Whack-A-Mole game, in which (you guessed it) you get to whack Matt Cutts with a gavel over and over again.

    A number of fun quotes from Cutts play in the background while you whack.

    This one comes from LoveMyVouchers, which says on its site:

    Inspired by Donkey Cutts, which was recently created by NetVoucherCodes, we decided to have a go at making our own game in order to help out frustrated webmasters and SEOs everywhere. As Matt Cutts himself said recently in an interview, “You never want to play whack a mole with a spammer”, and this got us thinking that there must be many people out there who would like to play whack a mole with him.

    What with the constant influx of updates from Google these days and a constantly shifting set of rules to play by, running and promoting your website is becoming more and more difficult. However, this light-hearted flash game should help to reduce your stress levels, if only for a minute or two.

    Matt even played it himself:

    Have fun.

    Personally, I’m waiting for someone to take dino-Cutts and make a Rampage-style game.

    Image via LoveMyVouchers

  • Google’s ‘Completely Clear’ Stance On Disavowing ‘Irrelevant’ Links

    We knew that when Google launched the Disavow Links tool, people were going to use it more than they should, even though Google made it clear that most people shouldn’t use it at all.

    A person doing some SEO work posted a question in the Google Webmaster Central product forum (via Search Engine Roundtable) that many others have probably wondered: Should I use the disavow tool for irrelevant links?

    In other words, should you tell Google to ignore links from sites that aren’t related to yours? The answer is no.

    Google’s own John Mueller jumped in to say this: “Just to be completely clear on this: you do not need to disavow links that are from sites on other topics. This tool is really only meant for situations where there are problematic, unnatural, PageRank-passing links that you can’t have removed.”
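
    For reference, the file the disavow tool accepts is just plain text: one URL or domain: entry per line, with lines starting with # treated as comments. Here’s a small, hypothetical sketch of generating one from a list of problem domains; the domains are invented, and Mueller’s point stands that most sites shouldn’t need this at all.

    ```python
    # Hypothetical sketch: build a disavow file from links you could not get
    # removed. The domains and URL below are invented examples.
    PROBLEM_DOMAINS = ["spammy-links-r-us.example", "paid-link-farm.example"]
    PROBLEM_URLS = ["http://blog.example/some-paid-post.html"]

    lines = ["# Links we asked to have removed but couldn't"]
    lines += [f"domain:{d}" for d in PROBLEM_DOMAINS]  # disavow whole domains
    lines += PROBLEM_URLS                              # disavow individual URLs

    with open("disavow.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
    ```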

    Google updates and manual action penalties have caused a lot of webmasters to re-evaluate their link profiles. Many have scrambled to get various links to their sites taken down, often going overboard (or even way overboard).

    For the record, Google still views backlinks as “a really, really big win in terms of quality for search results.”

    In other “how Google views links” news, Matt Cutts just put out an 8-minute video about how Google determines whether your links are paid or not.

    Image via Google.com

  • Here’s How Google Determines Whether Or Not Your Links Are ‘Paid’

    Google put out a new 8-minute video about paid links. Matt Cutts talks about the various things that the search engine takes into consideration when determining whether or not links are “paid”.

    99 percent of the time it’s abundantly clear, he says. Sometimes, not so much.

    He notes, “These are some of the criteria, but just like the webspam guidelines, they basically say, ‘Look, anything that’s deceptive or manipulative or abusive we reserve the right to take action on.’ It’s the same sort of thing. If we see a new technique that people are trying to exploit people’s trust or something like that, we’re willing to take action on that as well.”

    He points out that the FTC and other government agencies have guidelines about disclosure, and that Google’s thinking is pretty much aligned with these.

    One thing Google takes into consideration is the actual value of whatever someone is getting in exchange.

    “If you go to a conference, and you pick up a free t-shirt that’s probably pretty low-quality, that’s probably not going to change how you behave, right?” Cutts says. “That’s not going to change your behavior. On the other hand, if someone pays you outright $600 to link to you, that is clearly a lot of value. So on the spectrum of a pen and a t-shirt all the way up to something of great value, that’s one of the criteria that we use. Another good one is how close is something to money? So again, the vast majority of the time, people are actually giving you money. Sometimes people might say something like, ‘Hey, I’d like to send you a gift card.’ Gift cards are pretty fungible. You can convert those to money and back and forth not too easily. On the other hand, something like ‘I’m going to give you a free trial of perfume’ or ‘I’m going to buy you a beer’ or something like that, that’s less of a connection. But we do look at how close something is to actual money whenever we’re looking at those kinds of things.”

    “If someone goes and buys you dinner, and you write a blog post four months later, and the dinner wasn’t some huge steak dinner with eighteen courses or something like that, that’s probably not the sort of thing that we would worry about.”

    I’d be curious to know how they’d go about figuring that out anyway. It’s unclear exactly how many courses it takes.

    “Another criterion that we use is whether something is a gift or a loan,” he continues. “So imagine, for example, that somebody loaned out a car for someone to try out for a week versus giving them a car. There’s a big difference there because if you’re loaned a car for a week you still have to maintain the insurance on your car, you still have to make sure you have a place to store it, whereas if someone gives you a new car, that is something of a completely different nature. So if somebody’s giving you a review copy, and you have to return it, that’s a relatively well-respected thing where people understand, ‘Okay, I’m trying this out. I’m a gadget reviewer or whatever, and I get to see whether I like this camera or whatever, but I do have to send it back.’ Whereas if someone sends you a camera, and says, ‘Oh, you know what? Just keep it,’ that’s going to be something that’s much closer to material compensation in our opinion.”

    “We also look at the intended audience, and it can be hard to judge intent, but bear in mind, the vast majority of the time, the intent is crystal clear when someone’s giving you actual money to buy links, but take for example, suppose someone went to a Salesforce conference. You know, so they’re at Dreamforce, and they represent a nonprofit, and so they manage to say, ‘Okay, I’m a nonprofit, and I like to try out your service,’ and so at Salesforce conference or Dreamforce, they got a year’s free use of the service. Now, the intent there was not to get someone to embed paid links within an editorial blog post. The intent was to try to sign somebody up, see how they liked it….they can be someone who could tell other people about it…maybe it’s a subscription or a trial where they get six months free, and then after that they either have to convert or start paying money or something along those lines. That is something where the intent is not trying to get links for SEO value. It’s so that people can try it out.”

    Cutts then goes on to compare such a scenario to Google giving out gadgets at Google I/O – something critics have often pointed out when this discussion comes up.

    “Another thing to consider is whether or not it would be a surprise,” he says. “So if you’re a movie reviewer, it’s not a surprise that somebody probably lets you into a theater, and maybe you watch the movie for free. That’s not something that’s going to be a surprise. If it was a reporter for a tech blog, and they said, ‘Give me a laptop, and I get to keep it,’ that would be a surprise…and it would be something that was not reviewing the laptop. Just like, ‘I’d write about your startup if you give me a laptop.’ That would be the sort of thing that really should be disclosed.”

    In the end, you probably know if what you’re doing is wrong, but it’s really also about whether Google perceives what you’re doing to be wrong. Hopefully, this will give you a better idea of what to expect on their end.

  • Can Google Solve The Scraper Problem?

    Google has a form called the Scraper Report for people to report when they see a scraper site ranking ahead of the original content that it’s scraping.

    The idea is that you can let Google know about people stealing your content and your rankings, and hopefully get the situation rectified.

    Do you ever see scraper sites ranking above your own original content? Do you think this form will help Google solve the problem? Let us know what you think in the comments.

    Head of webspam Matt Cutts tweeted:

    The form asks for the URL on the site where the content was taken from, the exact URL on the scraper site, and the Google search result URL that demonstrates the problem.

    It then asks you to confirm that your site is following Google Webmaster Guidelines and is not affected by any manual actions. You confirm this with a checkbox.

    Scraper Report

    Danny Sullivan asks a good question:

    No answer on that so far, though Sullivan suggests in an article that Google will “potentially” use it as a way to improve its ranking system.

    Google actually put out something similar a few years back. That one said, “Report scraper pages…Google is testing algorithmic changes for scraper sites (especially blog scrapers). We are asking for examples, and may use data you submit to test and improve our algorithms.”

    The new one is much more vague on what Google intends to do with the information it obtains. Obviously the old one didn’t make a big enough difference.

    The reactions to Matt’s tweet have been interesting. One would like to see a similar tool for images.

    One response in particular has gone viral:

    As of the time of this writing, it’s got 14,455 retweets and 10,982 favorites. Certainly more than the average reply to a Matt Cutts tweet. The tweet is even getting coverage from publications like Mashable and Business Insider. Cutts has so far not responded.

    “Google’s efforts to thwart Internet copycats known as “scrapers” have backfired,” writes Mashable’s Jason Abbruzzese. “What started out with the best intentions has become Friday’s Internet joke du jour after Google was caught using its own scraper to mine content.”

    He goes on to note the obvious in that it “highlights the tension between the company’s goal of providing quick answers and its role as a portal to the rest of the Internet.”

    Sometimes Google’s own “scraping” doesn’t even “scrape” the right information.

    It remains to be seen whether Google’s new form will significantly solve the problem of scrapers appearing over original content, but I don’t think it will do anything to keep Google from putting its own “answers” for users’ searches – right or wrong.

    Do you think Google’s efforts will improve its search results? Share your thoughts.

  • Google: You Don’t Have To Dumb Your Content Down ‘That Much’

    Google’s Matt Cutts answers an interesting question in a new “Webmaster Help” video: “Should I write content that is easier to read or more scientific? Will I rank better if I write for 6th graders?”

    Do you think Google should give higher rankings to content that is as well-researched as possible, or content that is easier for the layman to understand? Share your thoughts in the comments.

    This is a great question as we begin year three of the post-Panda Google.

    “This is a really interesting question,” says Cutts. “I spent a lot more time thinking about it than I did a lot of other questions today. I really feel like the clarity of what you write matters a lot.”

    He says, “I don’t know if you guys have ever had this happen, but you land on Wikipedia, and you’re searching for information – background information – about a topic, and it’s way too technical. It uses all the scientific terms or it’s talking about a muscle or whatever, and it’s really hyper-scientific, but it’s not all that understandable, and so you see this sort of revival of people who are interested in things like ‘Explain it to me like I’m a five-year-old,’ right? And you don’t have to dumb it down that much, but if you are erring on the side of clarity, and on the side of something that’s going to be understandable, you’ll be in much better shape because regular people can get it, and then if you want to, feel free to include the scientific terms or the industry jargon, the lingo…whatever it is, but if somebody lands on your page, and it’s just an opaque wall of scientific stuff, you need to find some way to pull people in to get them interested, to get them enticed in trying to pick up whatever concept it is you want to explain.”

    Okay, it doesn’t sound so bad the way Cutts describes it, and perhaps I’m coming off a little sensational here, but it’s interesting that Cutts used the phrase, “You don’t have to dumb it down that much.”
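
    If you want a rough gauge of whether your copy errs on the side of clarity, a standard readability formula is one crude way to check. The sketch below computes a Flesch Reading Ease score with a very naive syllable counter, so treat the numbers as ballpark figures only; nothing here is a Google ranking signal.

    ```python
    # Crude Flesch Reading Ease check for a draft. The syllable counter is a
    # rough vowel-group heuristic, so scores are approximate.
    import re

    def count_syllables(word: str) -> int:
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text: str) -> float:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(1, len(words))
        syllables = sum(count_syllables(w) for w in words)
        # Standard formula: higher scores mean easier reading.
        return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

    technical = ("Nonsteroidal anti-inflammatory drugs may relieve the pain "
                 "associated with a pinched nerve.")
    plain = "Over-the-counter painkillers like ibuprofen can ease the pain."

    print(round(flesch_reading_ease(technical), 1))  # noticeably lower
    print(round(flesch_reading_ease(plain), 1))      # noticeably higher
    ```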

    This is a topic that we discussed last fall when Googler Ryan Moulton said in a conversation on Hacker News, “There’s a balance between popularity and quality that we try to be very careful with. Ranking isn’t entirely one or the other. It doesn’t help to give people a better page if they aren’t going to click on it anyways.”

    He then elaborated:

    Suppose you search for something like [pinched nerve ibuprofen]. The top two results currently are http://www.mayoclinic.com/health/pinched-nerve/DS00879/DSECT… and http://answers.yahoo.com/question/index?qid=20071010035254AA…
    Almost anyone would agree that the mayoclinic result is higher quality. It’s written by professional physicians at a world renowned institution. However, getting the answer to your question requires reading a lot of text. You have to be comfortable with words like “Nonsteroidal anti-inflammatory drugs,” which a lot of people aren’t. Half of people aren’t literate enough to read their prescription drug labels: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1831578/

    The answer on yahoo answers is provided by “auntcookie84.” I have no idea who she is, whether she’s qualified to provide this information, or whether the information is correct. However, I have no trouble whatsoever reading what she wrote, regardless of how literate I am.
    That’s the balance we have to strike. You could imagine that the most accurate and up to date information would be in the midst of a recent academic paper, but ranking that at 1 wouldn’t actually help many people.

    This makes for a pretty interesting debate. Should Google bury the most well-researched and accurate information just to help people find something that they can read easier, even if it’s not as high quality? Doesn’t this kind of go against the guidelines Google set forth after the Panda update?

    You know, like these specific questions Google suggested you ask about your content:

  • “Would you trust the information presented in this article?” (What’s more trustworthy, the scientific explanation from a reputable site or auntcookie’s take on Yahoo Answers?)
  • “Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?” (Uh…)
  • “Does the article provide original content or information, original reporting, original research, or original analysis?” (Original research and analysis, to me, suggests that someone is going to know and use the lingo.)
  • “Does the page provide substantial value when compared to other pages in search results?” (Couldn’t value include educating me about the terminology I might not otherwise understand?)
  • “Is the site a recognized authority on its topic?” (You mean the type of authority that would use the terminology associated with the topic?)
  • “For a health related query, would you trust information from this site?” (Again, are you really trusting auntcookie on Yahoo Answers over Mayo Clinic?)
  • “Does this article provide a complete or comprehensive description of the topic?” (Hmm. Complete and comprehensive. You mean as opposed to dumbed down for the layman?)
  • “Does this article contain insightful analysis or interesting information that is beyond obvious?” (I’m not making this up. Here’s Google’s blog post listing these right here.)
  • “Are the pages produced with great care and attention to detail vs. less attention to detail?” (You get the idea.)

    Maybe I’m missing something, but it seems like Google has been encouraging people to make their content as thorough, detailed, and authoritative as possible. I don’t see “Is your content dumbed down for clarity’s sake?” on the list. Of course, that was nearly three years ago.

    If quality is really the goal (as Google has said over and over again in the past), doesn’t the responsibility of additional research and additional clicking of links rest with the searcher? If I don’t understand what the most accurate and relevant result is saying, isn’t it my responsibility to continue to educate myself, perhaps by looking at other sources of information and looking up the things I don’t understand?

    But that would go against Google trying to get users answers as quickly as possible. That must be why Google is trying to give you the answers itself rather than having to send you to third-party sites. Too bad those answers aren’t always reliable.

    Cutts continues in the video, “So I would argue first and foremost, you need to explain it well, and then if you can manage to do that while talking about the science or being scientific, that’s great, but the clarity of what you do, and how you explain it often matters almost as much as what you’re actually saying because if you’re saying something important, but you can’t get it across, then sometimes you never got it across in the first place, and it ends up falling on deaf ears.”

    Okay, sure, but isn’t this just going to encourage users to dumb down content at the risk of educating users less? I don’t think that’s what Cutts is trying to say here, but people are going to do anything they can to get their sites ranked better. At least he suggests trying to use both layman’s terms and the more scientific stuff.

    “It varies,” he says. “If you’re talking only to industry professionals – terminators who are talking about the scientific names of bugs, and your audience is only bugs – terminator – you know, exterminator experts, sure, then that might make sense, but in general, I would try to make things as natural sounding as possible – even to the degree that when I’m writing a blog post, I’ll sometimes read it out loud to try to catch what the snags are where things are gonna be unclear. Anything you do like that, you’ll end up with more polished writing, and that’s more likely to stand the test of time than something that’s just a few, scientific mumbo jumbo stuff that you just spit out really quickly.”

    I’m not sure where the spitting stuff out really quickly thing comes into play here. The “scientific mumbo jumbo” (otherwise known as facts and actual terminology of things) tends to appear in lengthy, texty content, like Moulton suggested, no?

    Google, of course, is trying to get better at natural language with updates like Hummingbird and various other acquisitions and tweaks. It should only help if you craft your content around that.

    “It’s not going to make that much of a difference as far as ranking,” Cutts says. “I would think about the words that a user is going to type, which is typically going to be the layman’s terms – the regular words rather than the super scientific stuff – but you can find ways to include both of them, but I would try to err on the side of clarity if you can.”

    So yeah, dumb it down. But not too much. Just enough. But also include the smart stuff. Just don’t make it too smart.

    What do you think? Should Google dumb down search results to give users things that are easier to digest, or should it be the searcher’s responsibility to do further research if they don’t understand the accurate and well-researched information that they’re consuming? Either way, isn’t this kind of a mixed message compared to the guidance Google has always given regarding “quality” content? Share your thoughts.

    For the record, I have nothing against auntcookie. I know nothing about auntcookie, but that’s kind of the point.

  • Google Moves Link Network Focus To Poland

    Google continues to wipe out link networks across the Internet, apparently on a country-by-country basis.

    Earlier this month, Google’s Matt Cutts told Twitter followers about Google cracking down on German link networks. That is still happening, but focus is also moving over to Poland. Here’s the latest on the situation from Cutts:

    That links to Google’s Poland blog, where the company offers a similar post on unnatural links to the one it put on its German Webmaster blog earlier this month.

    While Google has called out specific link networks by name on numerous occasions, there are no specifics this time.

    Image via YouTube

  • Have Google’s Results Improved After 3 Years Of Panda?

    Monday marked the three-year anniversary of the day Google first announced the controversial Panda update. So much has happened since then. So many sites have felt the effects.

    Has your site been affected by the Panda update at anytime over the past three years? If you were negatively impacted, were you able to recover? Did the update cause you to take steps to “Google-proof” your business? Let us know in the comments.

    To celebrate the occasion, let’s revisit what Google actually said in the original announcement. Matt Cutts and Amit Singhal wrote:

    Our goal is simple: to give people the most relevant answers to their queries as quickly as possible. This requires constant tuning of our algorithms, as new content—both good and bad—comes online all the time.

    Many of the changes we make are so subtle that very few people notice them. But in the last day or so we launched a pretty big algorithmic improvement to our ranking—a change that noticeably impacts 11.8% of our queries—and we wanted to let people know what’s going on. This update is designed to reduce rankings for low-quality sites—sites which are low-value add for users, copy content from other websites or sites that are just not very useful. At the same time, it will provide better rankings for high-quality sites—sites with original content and information such as research, in-depth reports, thoughtful analysis and so on.

    We can’t make a major improvement without affecting rankings for many sites. It has to be that some sites will go up and some will go down. Google depends on the high-quality content created by wonderful websites around the world, and we do have a responsibility to encourage a healthy web ecosystem. Therefore, it is important for high-quality sites to be rewarded, and that’s exactly what this change does.

    It’s worth noting that this update does not rely on the feedback we’ve received from the Personal Blocklist Chrome extension, which we launched last week. However, we did compare the Blocklist data we gathered with the sites identified by our algorithm, and we were very pleased that the preferences our users expressed by using the extension are well represented. If you take the top several dozen or so most-blocked domains from the Chrome extension, then this algorithmic change addresses 84% of them, which is strong independent confirmation of the user benefits.

    So, we’re very excited about this new ranking improvement because we believe it’s a big step in the right direction of helping people find ever higher quality in our results. We’ve been tackling these issues for more than a year, and working on this specific change for the past few months. And we’re working on many more updates that we believe will substantially improve the quality of the pages in our results.

    You’ll notice that Google never mentioned the word Panda. If you’ll recall, nobody knew that was the name of it until a Wired interview with Cutts and Singhal. People had been calling it the “farmer” update because of its apparent purpose of penalizing low-quality content farms.

    Here’s an interesting quote from Singhal from that interview, which some may have forgotten. It was Caffeine that enabled sites to really take advantage of Google in the way that called for the Panda update in the first place:

    So we did Caffeine [a major update that improved Google’s indexing process] in late 2009. Our index grew so quickly, and we were just crawling at a much faster speed. When that happened, we basically got a lot of good fresh content, and some not so good. The problem had shifted from random gibberish, which the spam team had nicely taken care of, into somewhat more like written prose. But the content was shallow.

    Singhal also said recognizing a shallow-content site and defining low quality content was “a very, very hard problem that we haven’t solved.”

    He said solving the problem would be an ongoing evolution. How has it evolved after three years? Has it really gotten better at determining what is high quality?

    Well, at least the eHow toilet specialist that used to rank at the top for “level 4 brain cancer” is no longer on the first page. Whether or not Google’s results in general have improved significantly is debatable.

    At least Google gave some guidelines for what it viewed as quality after a few months. These came in the form of a list of questions for webmasters to ask themselves about their content as guidance on how Google had been looking at the issue.

    “One other specific piece of guidance we’ve offered is that low-quality content on some parts of a website can impact the whole site’s rankings, and thus removing low quality pages, merging or improving the content of individual shallow pages into more useful pages, or moving low quality pages to a different domain could eventually help the rankings of your higher-quality content,” Singhal wrote.

    Since that first Panda update rolled out, Google has launched roughly 25 Panda refreshes and updates. Barry Schwartz has a numbered list with approximate dates. The most recent listed one was last March, but that’s because Google stopped confirming every time they launch one when it became a rolling update. The company did randomly confirm one in July – a “softer” version that was “more finely targeted”.

    A couple months prior, Cutts said, “We’ve also been looking at Panda, and seeing if we can find some additional signals (and we think we’ve got some) to help refine things for the sites that are kind of in the border zone – in the gray area a little bit. And so if we can soften the effect a little bit for those sites that we believe have some additional signals of quality, then that will help sites that have previously been affected (to some degree) by Panda.”

    In some of the most recent Panda guidance Google has offered, Cutts explained in a video back in September (around the time Google announced its biggest overhaul since Caffeine), “It used to be that roughly every month or so we would have a new update, where you’d say, okay there’s something new – there’s a launch. We’ve got new data. Let’s refresh the data. It had gotten to the point where Panda – the changes were getting smaller, they were more incremental, we had pretty good signals, we had pretty much gotten the low-hanging wins, so there weren’t a lot of really big changes going on with the latest Panda changes. And we said let’s go ahead and rather than have it be a discrete data push that is something that happens every month or so at its own time, and we refresh the data, let’s just go ahead and integrate it into indexing.”

    He added, “And so if you think you might be affected by Panda, the overriding kind of goal is to try to make sure that you have high-quality content – the sort of content that people really enjoy, that’s compelling – the sort of thing that they’ll love to read that you might see in a magazine or in a book, and that people would refer back to or send friends to – those sorts of things.”

    “That would be the overriding goal, and since Panda is now integrated with indexing, that remains the goal of the entire indexing system,” he said. “So, if you’re not ranking as highly as you were in the past, overall, it’s always a good idea to think about, ‘Okay, can I look at the quality of the content on my site? Is there stuff that’s derivative or scraped or duplicate or just not as useful, or can I come up with something original that people will really enjoy,’ and those kinds of things tend to be a little more likely to rank higher in our rankings.”

    So in other words, while Google has altered how it implements Panda, not much has changed over the years in terms of what it’s trying to do.

    The update continues to influence how content is created. Businesses (like Mahalo – now Inside) are now going for Google-proof strategies, looking for ways to create content that Google can’t touch with its algorithm. Rap Genius, recently (and briefly) penalized by Google, is another example. These companies are going the mobile app route.

    In recent months, content producers have faced a similar obstacle from a much different traffic source. Facebook made changes to its News Feed algorithm in December that have been described as the social network’s version of the Panda update. It too is supposed to promote high-quality content, though its signals for determining what actually is high-quality leave a lot to be desired.

    So, how has Google done with Panda? Has it accomplished its goals? Have search results improved as a result of the update? After three years, have the deserving sites won the better rankings? Share your thoughts in the comments.

    Image via YouTube

  • Google’s Cutts Talks EXIF Data As A Ranking Factor

    Google may use EXIF data attached to images as a ranking factor in search results. This isn’t exactly a new revelation, but it is the topic of a new “Webmaster Help” video from the company.

    Matt Cutts responds to the submitted question, “Does Google use EXIF data from pictures as a ranking factor?”

    “The short answer is: We did a blog post, in I think April of 2012 where we talked about it, and we did say that we reserve the right to use EXIF or other sort of metadata that we find about an image in order to help people find information,” Cutts says. “And at least in the version of image search as it existed back then, when you clicked on an image, we would sometimes show the information from EXIF data in the right-hand sidebar, so it is something that Google is able to parse out, and I think we do reserve the right to use it in ranking.”

    “So if you’re taking pictures, I would go ahead, and embed that sort of information if it’s available within your camera because, you know, if someone eventually wants to search for camera types or focal lengths or dates or something like that it can be possibly a useful source of information,” he continues. “So I’d go ahead and include it if it’s already there. I wouldn’t worry about adding it if it’s not there. But we do reserve the right to use it as potentially a ranking factor.”
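
    If you’re curious what EXIF data your own images already carry, here’s a small sketch using the Pillow library (my choice for the example, not anything Google endorses); the file name is a placeholder.

    ```python
    # Print the EXIF metadata embedded in a photo, using Pillow
    # (pip install Pillow). The file name is a placeholder.
    from PIL import Image
    from PIL.ExifTags import TAGS

    img = Image.open("photo.jpg")
    exif = img.getexif()  # empty mapping if the image has no EXIF data

    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to names
        print(f"{name}: {value}")
    ```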

    The blog post he was talking about was called “1000 Words About Images.” It gives some tips on helping Google index your images, along with a Q&A section. One of the questions there is: What happens to the EXIF, XMP and other metadata my images contain?

    The answer was: “We may use any information we find to help our users find what they’re looking for more easily. Additionally, information like EXIF data may be displayed in the right-hand sidebar of the interstitial page that appears when you click on an image.”

    Google has made significant changes to image search since that post was written, causing a lot of sites to get a great deal less traffic from it.

    Image via YouTube

  • Matt Cutts: Backlinks Still Super Important To Search Quality

    In case you were wondering, backlinks are still really important to how Google views the quality of your site. Head of webspam Matt Cutts said as much in the latest “Webmaster Help” video, in which he discusses Google testing a version of its search engine that excludes backlinks as a ranking signal.

    The discussion was prompted when a user asked if Google has a version of the search engine that totally excludes backlink relevance.

    “We don’t have a version like that that is exposed to the public, but we have run experiments like that internally, and the quality looks much, much worse,” he says. “It turns out backlinks, even though there’s some noise, and certainly a lot of spam, for the most part are still a really, really big win in terms of quality for search results.”

    “We’ve played around with the idea of turning off backlink relevance, and at least for now, backlink relevance still really helps in terms of making sure we return the best, most relevant, most topical set of search results,” Cutts adds.

    I wonder how big a role backlinks are playing in these results.

    Image via YouTube

  • Google Says It Will Follow Five Redirects At The Same Time When Crawling

    About a year ago, Google put out a Webmaster Help video discussing PageRank as it relates to 301 redirects. Specifically, someone asked, “Roughly what percentage of PageRank is lost through a 301 redirect?”

    Google’s Matt Cutts responded, noting that it can change over time, but that it had been “roughly the same” for quite a while.

    “The amount of PageRank that dissipates through a 301 is currently identical to the amount of PageRank that dissipates through a link,” he explained. “So they are utterly the same in terms of the amount of PageRank that dissipates going through a 301 versus through a link. So that doesn’t mean use a 301. It doesn’t mean use a link. It means use whatever is best for your purposes because you don’t get to hoard or conserve any more PageRank if you use a 301, and likewise it doesn’t hurt you if you use a 301.”

    In a new Webmaster Central office hours video (via Search Engine Roundtable), Google’s John Mueller dropped another helpful tidbit related to redirects: Googlebot will follow up to five of them at the same time.

    “We generally prefer to have fewer redirects in a chain if possible. I think Googlebot follows up to five redirects at the same time when it’s trying to crawl a page, so up to five it would do within the same cycle. If you have more than five in a chain, then we would have to kind of think about that the next time we crawled that page, and follow the rest of the redirects…We generally recommend trying to reduce it to one redirect wherever possible. Sometimes there are technical reasons why that’s not possible, so something with two redirects is fine.”
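
    If you want to audit your own redirect chains against that limit, here’s a rough sketch that follows Location headers by hand and flags anything longer than five hops. The starting URL is a placeholder, and the five-hop figure simply echoes Mueller’s comment.

    ```python
    # Follow a redirect chain hop by hop and flag chains longer than five,
    # the figure Mueller mentions. The starting URL is a placeholder.
    import http.client
    from urllib.parse import urljoin, urlparse

    REDIRECT_CODES = {301, 302, 303, 307, 308}

    def redirect_chain(url, max_hops=10):
        chain = []
        for _ in range(max_hops):
            parts = urlparse(url)
            conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                        else http.client.HTTPConnection)
            conn = conn_cls(parts.netloc)
            path = (parts.path or "/") + ("?" + parts.query if parts.query else "")
            conn.request("HEAD", path)
            resp = conn.getresponse()
            location = resp.getheader("Location")
            conn.close()
            if resp.status not in REDIRECT_CODES or not location:
                break
            url = urljoin(url, location)  # resolve relative Location headers
            chain.append((resp.status, url))
        return chain

    hops = redirect_chain("http://example.com/old-page")
    print(f"{len(hops)} redirect(s): {hops}")
    if len(hops) > 5:
        print("Longer than five hops; consider collapsing the chain.")
    ```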

    As Barry Schwartz at SER notes, this may be the first time Google has given a specific number. In the comments of his post, Michael Martinez says it used to be 2.

    Image via YouTube

  • Donkey Cutts Is Donkey Kong With Matt Cutts

    Somebody has made a Donkey Kong game with Matt Cutts as the ape. It’s called, appropriately, Donkey Cutts.

    In the game, social signals increase your rank, and you get ten points for jumping over penguins and pandas, or lose the points if they get you. You can also gain points for “awesome content,” and more links and shares. Penalties can be “fixed with a trusty SEO hammer”. You know, like the hammer in Donkey Kong.

    The game comes from NetVoucherCodes.co.uk, and it’s a hell of a lot harder to lose than the real Donkey Kong. Cutts stands at the top like Kong himself, shouting things like “Nofollow any paid links!” and “Make your links from blog comments genuine!”

    We haven’t seen any comment from Cutts yet.

    Via Search Engine Land

    Image via NetVoucherCodes

  • Matt Cutts Talks About A Typical Day In Spam-Fighting

    The latest “Webmaster Help” video from Google is an interesting (and long) one. Google webspam king Matt Cutts talks about a day in the life of someone on the webspam team.

    Here’s the set of questions he answers verbatim:

    What is a day in the life of a search spam team member like? What is the evolution of decisions in terms of how they decide which aspects of the search algorithm to update? Will certain things within the algorithm never be considered for removal?

    He begins by noting that the team is made up of both engineers and manual spam fighters, both of which he addresses separately.

    First, he gives a rough idea of a manual spam-fighter’s day.

    “Typically it’s a mix of reactive spam-fighting and proactive spam-fighting,” he says. “So reactive would mean we get a spam report or somehow we detect that someone is spamming Google. Well, we have to react to that. We have to figure out how do we make things better, and so a certain amount of every day is just making sure that the spammers don’t infest the search results, and make the search experience horrible for everyone. So that’s sort of like not hand to hand combat, but it is saying ‘yes’ or ‘no’ this is spam, or trying to find the spam that is currently ranking relatively well. And then in the process of doing that, the best spam-fighters I know are fantastic at seeing the trends, seeing the patterns in that spam, and then moving into a proactive mode.”

    This would involve trying to figure out how they’re ranking so highly, the loophole they’re exploiting, and trying to fix it at the root of the problem. This could involve interacting with engineers or just identifying specific spammers.

    “Engineers,” he says. “They absolutely look at the data. They absolutely look at examples of spam, but your average day is usually spent coding and doing testing of ideas. So you’ll write up an algorithm that you think will be able to stop a particular type of spam. There’s no one algorithm that will stop every single type of spam. You know, Penguin, for example, is really good at several types of spam, but it doesn’t tackle hacked sites, for example. So if you are an engineer, you might be working on, ‘How do I detect hacked sites more accurately?’”

    He says they would come up with the best techniques and signals they can use, and write an algorithm that tries to catch as many hacked sites as possible while preserving safely the sites that are innocent. Then they test it, and run it across the index or run an experiment with ratings from URLs and see if things look better. Live traffic experiments, seeing what people click on, he says, help them identify what the false positives are.

    On the “evolution of decisions” part of the question, Cutts says, “We’re always going back and revisiting, and saying, ‘Okay, is this algorithm still effective? Is this algorithm still necessary given this new algorithm?’ And one thing that the quality team (the knowledge team) does very well is trying to go back and ask ourselves, ‘Okay, let’s revisit our assumptions. Let’s say if we were starting from scratch, would we do it this way? What is broken, or stale, or outdated, or defunct compared to some other new way of coming up with this?’ And so we don’t just try to have a lot of different tripwires that would catch a lot of different types of spam, you try to come up with elegant ways that will always catch spam, and try to highlight new types of spam as they occur.”

    He goes on for about another three minutes after that.

    Image via YouTube

  • Google Tells You How To Make Infinite Scroll More Search-Friendly

    Does your site utilize infinite scroll? When done right, it improves the user experience and saves users from having to click through multiple pages to find the additional information they seek. Have you ever considered how it might affect your search presence?

    Google is offering up some advice on implementing a search-friendly version of infinite scroll, which helps Google recognize and crawl that content.

    “Your site’s news feed or pinboard might use infinite scroll—much to your users’ delight!” Google says on the Webmaster Central blog. “When it comes to delighting Googlebot, however, that can be another story. With infinite scroll, crawlers cannot always emulate manual user behavior–like scrolling or clicking a button to load more items–so they don’t always access all individual items in the feed or gallery. If crawlers can’t access your content, it’s unlikely to surface in search results.”

    For one, you need to make sure you or your CMS creates a paginated series to go along with the infinite scroll, as Google demonstrates below. Google also links to a demo in its blog post.

    [Image: infinite scroll paired with a paginated series]

    Each item must be accessible, and listed only once in the paginated series.

    “Before you start, chunk your infinite-scroll page content into component pages that can be accessed when JavaScript is disabled,” Google says. “Determine how much content to include on each page. Be sure that if a searcher came directly to this page, they could easily find the exact item they wanted (e.g., without lots of scrolling before locating the desired content). Maintain reasonable page load time. Divide content so that there’s no overlap between component pages in the series (with the exception of buffering).”
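
    As a rough sketch of that chunking step, a single component page might look like the fragment below. The /items?page=2 URL and ten-items-per-page split are hypothetical choices for illustration; the point is that the items and the links to neighboring pages are plain HTML, so they remain reachable even with JavaScript disabled.

      <!-- Hypothetical component page: https://example.com/items?page=2 -->
      <!-- Holds items 11 to 20 only, so there is no overlap with pages 1 or 3 -->
      <ul class="items">
        <li><a href="/item/11">Item 11</a></li>
        <!-- ... items 12 through 19 ... -->
        <li><a href="/item/20">Item 20</a></li>
      </ul>

      <!-- Static pagination links that work without JavaScript -->
      <nav>
        <a href="/items?page=1">Newer items</a>
        <a href="/items?page=3">Older items</a>
      </nav>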

    Google also advises structuring the URLs in your paginated series so search engines can process them (you can see examples in the blog post), and using rel=next and rel=prev link elements in the <head> of each component page. Google will ignore pagination values in the <body> because they could be created by user-generated content.
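
    In the <head> of that same hypothetical component page, the pagination links might look something like this (a sketch of the convention, not markup copied from Google’s post); the first page of a series would carry only rel=next, and the last only rel=prev:

      <head>
        <title>Items, page 2</title>
        <!-- Tell crawlers how this component page relates to its neighbors -->
        <link rel="prev" href="https://example.com/items?page=1">
        <link rel="next" href="https://example.com/items?page=3">
      </head>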

    You should also use replaceState/pushState on the infinite scroll page, according to Google. Google suggests pushState for user actions that resemble a click or actively turning a page, and providing users with the ability to serially back up through the most recently paginated content.
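
    A minimal sketch of the history API piece is below. The onChunkVisible and onLoadMoreClick handlers and the data-page/data-url attributes are hypothetical names, and a real implementation would add scroll throttling and error handling; the sketch only shows how replaceState and pushState might divide the work.

      <script>
        // Assume each chunk of items appended to the infinite scroll page
        // carries data-page and data-url attributes for its component page.

        // Called when a chunk scrolls into view: update the address bar to the
        // matching component-page URL without piling up history entries.
        function onChunkVisible(chunkEl) {
          history.replaceState({ page: chunkEl.dataset.page }, '', chunkEl.dataset.url);
        }

        // Called for a deliberate, click-like action (e.g. a "load more" button):
        // push a new entry so the back button steps back through viewed pages.
        function onLoadMoreClick(nextPage, nextUrl) {
          history.pushState({ page: nextPage }, '', nextUrl);
          // ...fetch and append the next chunk of items here...
        }
      </script>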

    Finally, Google says you should test your implementation by making sure page values adjust as users scroll and investigating potential usability implications.

    Google itself has considered using infinite scroll on its search results in the past, but has ultimately elected to stick with the paginated version of search results we’re all used to.

    Ultimately, you have to decide how helpful infinite scroll is to your own site’s experience. Imagine if you had to click over to another page for every ten tweets in your Twitter timeline. Your decision might come down to how frequently new content is pushed out on your site.

    Images via Google

  • Cutts: Don’t Worry About Grammatical Errors In Your Blog Comments

    In his latest “Webmaster Help” video, Google’s Matt Cutts answers a question that a lot of people have probably wondered about, particularly since Google launched the Panda update in 2011: how do the comments on your blog affect how Google sees the quality of your pages?

    The exact wording of the question was:

    Should I correct the grammar on comments to my WordPress blog? Should I not approve comments with poor grammar? Will approving comments with grammar issues affect my page’s quality rating?

    Long story short: don’t worry about it.

    “I wouldn’t worry about the grammar in your comments. As long as the grammar on your own page is fine, you know, there are people on the Internet, and they write things, and it doesn’t always make sense. You can see nonsense comments, you know, on YouTube and other large properties, and that doesn’t mean a YouTube video won’t be able to rank. Just make sure that your own content is high quality, and you might want to make sure that people aren’t leaving spam comments. You know, if you’ve got a bot, then they might leave bad grammar, but if it’s a real person, and they’re leaving a comment, and the grammar is not quite perfect, that usually reflects more on them than it does on your site, so I wouldn’t stress out about that.”

    You would think the spam would reflect more on them too, but go ahead and continue stressing out about that.

    Images via YouTube

  • Google Updated The Page Layout Algorithm Last Week

    Google’s Matt Cutts announced on Twitter that the search engine launched a data refresh for its “page layout” algorithm last week.

    If you’ll recall, this is the Google update that specifically looks at how much content a page has “above the fold”. The idea is that you don’t want your site’s content to be pushed down or dwarfed by ads and other non-content material.

    You want it to be simple for users to find your content without having to scroll.

    Cutts first announced the update in January 2012. He said this at the time:

    Rather than scrolling down the page past a slew of ads, users want to see content right away. So sites that don’t have much content “above-the-fold” can be affected by this change. If you click on a website and the part of the website you see first either doesn’t have a lot of visible content above-the-fold or dedicates a large fraction of the site’s initial screen real estate to ads, that’s not a very good user experience. Such sites may not rank as highly going forward.

    We understand that placing ads above-the-fold is quite common for many websites; these ads often perform well and help publishers monetize online content. This algorithmic change does not affect sites who place ads above-the-fold to a normal degree, but affects sites that go much further to load the top of the page with ads to an excessive degree or that make it hard to find the actual original content on the page. This new algorithmic improvement tends to impact sites where there is only a small amount of visible content above-the-fold or relevant content is persistently pushed down by large blocks of ads.

    The initial update affected less than 1% of searches globally, Google said. It’s unclear how far-reaching this data refresh is. Either way, if you’ve suddenly lost Google traffic, you may want to check out your site’s design.

    Unlike some of its other updates, this one shouldn’t be too hard to recover from if you were hit.

    You should check out Google’s browser size tool, which lets you get an idea of how much of your page different users are seeing.

    Image via Google