WebProNews

Tag: Webmaster Help Videos

  • Google Talks Determining Quality When There Aren’t Links

    Google has a new Webmaster Help video out talking about how it looks at quality of content that doesn’t have many links pointing to it.

    Specifically, Matt Cutts takes on the following question:

    How does Google determine quality content if there aren’t a lot of links to a post?

    “In general, that sort of reverts back to the way search engines were before links,” he says. “You’re pretty much judging based on the text on the page. Google has a lot of stuff to sort of say OK, the first time we see a word on a page, count it a little bit more. The next time, a little more, but not a ton more. And that after a while, we say, ‘You know what? We’ve seen this word. Maybe this page is about this topic,’ but it doesn’t really help you to keep repeating that keyword over and over and over again. In fact, at some point, we might view that as keyword stuffing, and then the page would actually do less well – not as well as just a moderate number of mentions of a particular piece of text.”

    He continues, “We do have other ways. In theory we could say, ‘Well, does it sit on a domain that seems to be somewhat reputable?’ There are different ways you can try to assess the quality of content, but typically, if you go back to a user typing possibly some really rare phrase, if there are no other pages on the web that have that particular phrase, even if there’s not that many links, then that page can be returned because we think it might be relevant. It might be topical to what the user is looking for. It can be kind of tough, but at that point, we sort of have to fall back, and assess based on the quality of the content that’s actually on the text – that’s actually on the page.”

    A few years ago, after the Panda update was first launched, Google shared a list of questions you can ask yourself about your content to get an idea of how Google might view it in terms of quality. You might want to check that out if you haven’t yet.

    Image via YouTube

  • Matt Cutts Talks Google Link Extraction And PageRank

    In a new video, Matt Cutts, Google’s head of webspam, discussed how Google views two links with different anchor text on one page pointing to the same destination, and how that affects PageRank.

    The explanation is Cutts’ response to the following submitted question:

    What impact would two links on a page pointing to the same target, each using different anchor text, have on the flow of PageRank?

    He said, “This is kind of an example of what I think of as dancing on the head of a pin. I’ll try to give you an answer. If you’re telling me that the most important thing for your SEO strategy is knowing what two links from one page do – you know, I understand if people are curious about it – but you might want to step back, and look at the higher mountain top of SEO, and your SEO strategy, and the architecture of your site, and how is the user experience, and how is the speed of the site, and all of that sort of stuff because this is sort of splitting hairs stuff.”

    “So, with that said,” he continued, “looking at the original PageRank paper, if you had two links from one page to another page, both links would flow PageRank, and so the links – the amount of PageRank gets divided evenly (in the original PageRank paper) between all the outgoing links, and so it’s the case that if two links both go to the same page then twice as much PageRank would go to that page. That’s in the original PageRank paper. If they have different anchor text, well that doesn’t affect the flow of PageRank, which is what your question was about, but I’ll go ahead and try to answer how anchor text might flow.”

    “So we have a link extraction process, which is we look at all the links on a page, and we extract those, and we annotate or we fix them to the documents that they point to. And that link extraction process can select all the links, or it might just select one of the links, so it might just select some of the links, and that behavior changes over time. The last time I checked was 2009, and back then, we might, for example, only have selected one of the links from a given page. But again, this is the sort of thing where if you’re really worried about this as a factor in SEO, I think it’s probably worthwhile to take a step back and look at high order bits – more important priorities like how many of my users are actually making it through my funnel, and are they finding good stuff that they really enjoy? What is the design of my homepage? Do I need to refresh it because it’s starting to look a little stale after a few years?”

    There’s that mention of stale-looking sites again.
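
    As for the PageRank arithmetic Cutts describes, here is a rough worked example based on the original PageRank paper he references (a simplified sketch; the live ranking system layers many other signals on top): a page with PageRank P and k outgoing links passes roughly d × P / k through each link, where d is the damping factor (0.85 in the paper). So if a page has ten outgoing links and two of them point to the same target, that target receives about 2 × 0.85 × P / 10 = 0.17P from that page rather than 0.085P.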

    The main point here is that you should spend less time nitpicking small things like how much PageRank is flowing from two links on a single page, and what anchor text they’re using, and focus on bigger-picture things that will make your site better. This is pretty much the same message we always hear from the company.

    Perhaps that’s the real reason that Google stopped putting out those monthly lists of algorithm changes.

    Image via YouTube

  • Google: Small Sites Can Outrank Big Sites

    The latest Webmaster Help video from Google takes on a timeless subject: small sites being able to outrank big sites. It happens from time to time, but how can it be done? Do you have the resources to do it?

    Some think it’s simply a lost cause, but in the end, it’s probably just going to depend on what particular area your business is in, and if there are real ways in which you can set yourself apart from your bigger competition.

    Do you see small sites outranking big ones very often? Let us know in the comments.

    This time, Matt Cutts specifically tackles the following question:

    How can smaller sites with superior content ever rank over sites with superior traffic? It’s a vicious circle: A regional or national brick-and-mortar brand has higher traffic, leads to a higher rank, which leads to higher traffic, ad infinitum.

    Google rephrased the question for the YouTube title as “How can small sites become popular?”

    Cutts says, “Let me disagree a little bit with the premise of your question, which is just because you have some national brand, that automatically leads to higher traffic or higher rank. Over and over again, we see the sites that are smart enough to be agile, and be dynamic, and respond quickly, and roll out new ideas much faster than these sort of lumbering, larger sites, can often rank higher in Google search results. And it’s not the case that the smaller site with superior content can’t outdo the larger sites. That’s how the smaller sites often become the larger sites, right? You think about something like MySpace, and then Facebook, or Facebook, and then Instagram. And all these small sites have often become very big. Even AltaVista and Google because they do a better job of focusing on the user experience. They return something that adds more value.”

    “If it’s a research report organization, the reports are higher quality or they’re more insightful, or they look deeper into the issues,” he continues. “If it’s somebody that does analysis, their analysis is just more robust.”

    Of course, sometimes users like the dumbed-down version. But don’t worry, you don’t have to dumb down your content that much.

    “Whatever area you’re in, if you’re doing it better than the other incumbents, then over time, you can expect to perform better, and better, and better,” Cutts says. “But you do have to also bear in mind, if you have a one-person website, taking on a 200 person website is going to be hard at first. So think about concentrating on a smaller topic area – one niche – and sort of say, on this subject area – on this particular area, make sure you cover it really, really well, and then you can sort of build out from that smaller area until you become larger, and larger, and larger.”

    On that note, David O’Doherty left an interesting comment on the video, saying, “I can’t compete with Zillow, Trulia or Realtor on size so I try to focus on the smaller important details, neighborhoods, local events, stuff that matters to people. Focusing on a niche, creating trust with the visitors to your site, providing valuable original content is paramount to success. It’s not easy and takes time and I have a lot of help but it appears to be working.”

    “If you look at the history of the web, over and over again, you see people competing on a level playing field, and because there’s very little friction in changing where you go, and which apps you use, and which websites you visit, the small guys absolutely can outperform the larger guys as long as they do a really good job at it,” he adds. “So good luck with that. I hope it works well for you. And don’t stop trying to produce superior content, because over time, that’s one of the best ways to rank higher on the web.”

    Yes, apparently Google likes good content. Have you heard?

    Do you think the big sites can be outranked by little sites with enough good content and elbow grease? Share your thoughts in the comments.

    Image via YouTube

  • Google’s ‘Rules Of Thumb’ For When You Buy A Domain

    Google has a new Webmaster Help video out, in which Matt Cutts talks about buying domains that have had trouble with Google in the past, and what to do. Here’s the specific question he addresses:

    How can we check to see if a domain (bought from a registrar) was previously in trouble with Google? I recently bought, and unbeknownst to me the domain isn’t being indexed and I’ve had to do a reconsideration request. How could I have prevented?

    “A few rules of thumb,” he says. “First off, do a search for the domain, and do it in a couple ways. Do a ‘site:’ search, so, ‘site: domain.com’ for whatever it is that you want to buy. If there’s no results at all from that domain even if there’s content on that domain, that’s a pretty bad sign. If the domain is parked, we try to take parked domains out of our results anyways, so that might not indicate anything, but if you try to do ‘site:’ and you see zero results, that’s often a bad sign. Also just search for the domain name or the name of the domain minus the .com or whatever the extension is on the end because you can often find out a little of the reputation of the domain. So were people spamming with that domain name? Were they talking about it? Were they talking about it in a bad way? Like this guy was sending me unsolicited email, and leaving spam comments on my blog. That’s a really good way to sort of figure out what’s going on with that site or what it was like in the past.”

    “Another good rule of thumb is to use the Internet Archive, so if you go to archive.org, and you put in a domain name, the archive will show you what the previous versions of that site look like. And if the site looked like it was spamming, then that’s definitely a reason to be a lot more cautious, and maybe steer clear of buying that domain name because that probably means you might have – the previous owner might have dug the domain into a hole, and you just have to do a lot of work even to get back to level ground.”

    Don’t count on Google figuring it out or giving you an easy way to get things done.

    Cutts continues, “If you’re talking about buying the domain from someone who currently owns it, you might ask, can you either let me see the analytics or the Webmaster Tools console to check for any messages, or screenshots – something that would let me see the traffic over time, because if the traffic is going okay, and then dropped a lot or has gone really far down, then that might be a reason why you would want to avoid the domain as well. If despite all that, you buy the domain, and you find out there was some really scuzzy stuff going on, and it’s got some issues with search engines, you can do a reconsideration request. Before you do that, I would consider – ask yourself are you trying to buy the domain just because you like the domain name or are you buying it because of all the previous content or the links that were coming to it, or something like that. If you’re counting on those links carrying over, you might be disappointed because the links might not carry over. Especially if the previous owner was spamming, you might consider just doing a disavow of all the links that you can find on that domain, and try to get a completely fresh start whenever you are ready to move forward with it.”

    Cutts did a video about a year ago about buying spamming domains advising buyers not to be “the guy who gets caught holding the bag.” Watch that one here.

    Image via YouTube

  • Cutts Talks SEO ‘Myths,’ Says To Avoid ‘Group Think’

    In the latest “Webmaster Help” video, Matt Cutts talks about SEO “myths”. He responds to this question:

    What are some of the biggest SEO Myths you see still being repeated (either at conferences, or in blogs, etc.)?

    There are a lot of them, he says.

    “One of the biggest, that we always hear,” he says, “is if you buy ads, you’ll rank higher on Google, and then there’s an opposing conspiracy theory, which is, if you don’t buy ads, you’ll rank better on Google, and we sort of feel like we should get those two conspiracy camps together, and let them fight it all out, and then whoever emerges from one room, we can just debunk that one conspiracy theory. There’s a related conspiracy theory or myth, which is that Google makes its changes to try to drive people to buy ads, and having worked in the search quality group, and working at Google for over thirteen years, I can say, here’s the mental model you need to understand why Google does what it does in the search results. We want to return really good search results to users so that they’re happy, so that they’ll keep coming back. That’s basically it. Happy users are loyal users, and so if we give them a good experience on one search, they’ll think about using us the next time they have an information need, and then along the way, if somebody clicks on ads, that’s great, but we’re not gonna make an algorithmic change to try to drive people to buy ads. If you buy ads, it’s not going to algorithmically help your ranking in any way, and likewise it’s not going to hurt your ranking if you buy ads.”

    Google reported its quarterly earnings yesterday with a 21% revenue increase on the company’s own sites (like its search engine) year-over-year. Paid clicks were up 26% during that time.

    Cutts continues with another “myth”.

    “I would say, just in general, thinking about the various black hat forums and webmaster discussion boards, never be afraid to think for yourself. It’s often the case that I’ll see people get into kind of a ‘group think,’ and they decide, ‘Ah ha! Now we know that submitting our articles to these article directories is going to be the best way to rank number one.’ And then six months later, they’ll be like, ‘OK, guest blogging! This is totally it. If you’re guest blogging, you’re gonna go up to number one,’ and a few months before that, ‘Oh, link wheels. You gotta have link wheels if you’re gonna rank number one,’ and it’s almost like a fad.”

    To be fair, some of this “group think” stuff has worked for some sites in the past until Google changed its algorithm to stop them from working.

    He suggests that if somebody really had a “foolproof” way to make money online, they’d probably use it to make money rather than putting it in an e-book or tool, and selling it to people.

    “The idea that you’re going to be able to buy some software package, and solve every single problem you’ve ever had is probably a little bit of a bad idea,” he says.

    “It’s kind of interesting how a lot of people just assume Google’s thinking about nothing but the money as far as our search quality, and truthfully, we’re just thinking about how do we make our search results better,” he says.

    Google’s total revenue for the quarter was up 19% year-over-year, which still wasn’t enough to meet investors’ expectations.

    Image via YouTube

  • How To Make Videos Like Matt Cutts’

    The latest “Webmaster Help” video from Google isn’t so much a webmaster help video as a discussion about how they actually make these videos. It’s meant to give some advice to businesses that want to make more use of online video.

    There are some good, practical tips here for getting started easily and cheaply.

    If you’ve ever wanted to make videos like Matt Cutts’ videos, you should give this one a watch.

    There’s also the possibility of doing live video, of course. A recent report from Ustream finds that business use of live online video will double by 2016.

    Image via YouTube

  • An Update (Kind Of) On How Google Handles JavaScript

    The latest Google Webmaster Help video provides an update on where Google is on handling JavaScript and AJAX. Well, an update on where they were nearly a year ago at least.

    Matt Cutts responds to this question:

    JavaScript is being used more and more to progressively enhance content on page & improve usability. How does Googlebot handle content loaded (AJAX) or displayed (JC&CSS) by Javascript on pageload, on click?

    “Google is pretty good at indexing JavaScript, and being able to render it, and bring it into our search results. So there’s multiple stages that have to happen,” Cutts says. “First off, we try to fetch all the JavaScript, CSS – all those sorts of resources – so that we can put the page under the microscope, and try to figure out, ‘Okay, what parts of this page should be indexed? What are the different tokens or words that should be indexed?’ that sort of thing. Next, you have to render or execute the JavaScript, and so we actually load things up, and we try to pretend as if a real browser is sort of loading that page, and what would that real browser do? Along the way, there are various events you could trigger or fire. There’s the page on load. You could try to do various clicks and that sort of thing, but usually there’s just the JavaScript that would load as you start to load up the page, and that would execute there.”

    “Once that JavaScript has all been loaded, which is the important reason why you should always let Google crawl the JavaScript and the CSS – all those sorts of resources – so that we can execute the page,” he continues. “Once we’ve fetched all those resources, we try to render or execute that JavaScript, and then we extract the tokens – the words that we think should be indexed – and we put that into our index.”

    “As of today, there’s still a few steps left,” Cutts notes. “For example, that’s JavaScript on the page. What if you have JavaScript that’s injected via an iframe? We’re still working on pulling in indexable tokens from JavaScript that are accessible via iframes, and we’re getting pretty close to that. As of today, I’d guess that we’re maybe a couple months away although things can vary depending on engineering resources, and timelines, and schedules, and that sort of thing. But at that point, then you’ll be able to have even included Javascript that can add a few tokens to the page or that we can otherwise index.”
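
    Cutts’ point about letting Google fetch JavaScript and CSS usually comes down to robots.txt. As a rough sketch (the directory names are hypothetical, not from the video), rules like these are what keep Googlebot from rendering a page the way a real browser would, and the fix is simply to remove them:

    User-agent: *
    Disallow: /js/
    Disallow: /css/

    If you want to be explicit instead, Google also supports the Allow directive, so an Allow: /js/ and Allow: /css/ under a User-agent: Googlebot group accomplishes the same thing.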

    It’s worth noting that this video was recorded almost a year ago (May 8th, 2013). That’s how long it can take for Google to release these things sometimes. Cutts notes that his explanation reflects that particular point in time. We’re left to wonder how far Google has really come since then.

    There’s that transparency we’re always hearing about.

    He also notes that Google’s not the only search engine, so you may want to think about what other search engines are able to do. He also says Google reserves the right to put limits on how much it’s going to index or how much time it will spend processing a page.

    Image via YouTube

  • Matt Cutts Does His Best HAL 9000

    In the latest “Webmaster Help” video, Google’s Matt Cutts takes on a question from “Dave,” who asks, “When will you stop changing things?”

    “Look, I’m sorry, Dave, but I can’t do that,” he replies.

    Yes, the quote is actually, “I’m sorry, Dave. I’m afraid I can’t do that,” (at least in the movie) but we’re pretty sure that’s what Cutts was going for.

    He goes on to explain that Google is always going to keep changing. Breaking news, I know.

    Also, his shirt changes colors throughout the video.

    Image via YouTube

  • Matt Cutts On How To Get Google To Recognize Your Mobile Pages

    Google has a new “Webmaster Help” video out. This time Matt Cutts discusses optimizing for the mobile web. Specifically, he takes on this submitted question:

    Is there a way to tell Google there is a mobile version of a page, so it can show the alternate page in mobile search results? Or similarly, that a page is responsive and the same URL works on mobile?

    Cutts says this is a very popular question. Google has plenty of information on the subject out there already, but obviously people still aren’t quite grasping it.

    “Don’t block javascript and CSS. That actually makes a pretty big difference,” he says. “If we’re able to fetch the javascript and CSS we can basically try to figure out whether it’s responsive design on our side. So my advice – and I’m going to keep hitting this over and over and over again – is never block javascript and CSS. Go ahead and let Googlebot fetch those, interpret those, figure out whether a site is responsive, and do all sorts of other ways like executing or rendering javascript to find new links, and being able to crawl your website better.”

    “The other way to do it is to have one version of the page for desktop and another version of the page for regular mobile smartphone users, and to have separate URLs,” he continues. “So how do you handle that case correctly? Well, first off, you want to make sure that on your desktop page, you do a rel=alternate that points to the smartphone version of your page. That basically lets Google know, ‘Yes, these two versions of the same page are related to each other because this is the smartphone version, and this is the desktop version.’ Likewise, on the smartphone version, you want to do a rel=canonical to the desktop version. What that does is it tells Googlebot, ‘Hey, even though this is its own separate URL – while the content is the same – it should really be glommed together with the desktop version.’ And so as long as you have those bi-directional links (a rel=alternate pointing from the desktop to the smartphone and a rel=canonical pointing from the smartphone to the desktop) then Google will be able to suss out the difference between those, and be able to return the correct version to the correct user.”

    “Now there’s one other thing to bear in mind, which is that you can also just make sure that you redirect any smartphone agents from the desktop version to the smartphone version,” Cutts adds. “So we look for that. If we crawl with Googlebot Mobile, and we see that we get redirected to a separate URL then we start to interpret, and say, ‘Ah, it looks like most people are doing that – where they have a desktop version of the page, a smartphone user agent comes in, and they get redirected to a different URL.’ Of course, the thing to bear in mind is just like earlier – we said not to block javascript and CSS – one common mistake that we see is blocking Googlebot Mobile or blocking Googlebot whenever it tries to fetch the smartphone version of the page. Don’t block Googlebot in either case. Just make sure that you return the correct things, and treat Googlebot Mobile like you would treat a smartphone user agent, and treat Googlebot (regular) just like you would treat a desktop user.”

    As long as you follow these best practices, he says, Google will figure it out.
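
    For separate-URL setups, the annotations Cutts describes are just link elements in the head of each page. A minimal sketch, assuming a desktop page at www.example.com/page-1 and a smartphone version at m.example.com/page-1 (both URLs hypothetical):

    <!-- On the desktop page (www.example.com/page-1): -->
    <link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page-1">

    <!-- On the smartphone page (m.example.com/page-1): -->
    <link rel="canonical" href="http://www.example.com/page-1">

    A responsive site doesn’t need any of this, since one URL serves everyone – which is why the only requirement there is letting Googlebot fetch the JavaScript and CSS.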

    The video page points to this page on building smartphone-optimized websites, which includes an overview of Google’s recommendations, common mistakes, and more info about various elements of the subject.

    Image via YouTube

  • Matt Cutts Was ‘Trying To Decide How Sassy To Be When Answering This Question’

    Google has put out a new “Webmaster Help” video with the title “Can sites do well without using spammy techniques?”

    That’s a bit rephrased from the actual question that prompted the response:

    Matt, Does (sic) the good guys still stand a chance? We’re a small company that hired an SEO firm that we thought was legit, but destroyed our rankings w/ spam backlinks. We’ve tried everything but nothing helps. What can a company with good intentions do?

    Cutts begins, “We were trying to decide how sassy to be when answering this question, because essentially you’re saying, ‘Do the good guys stand a chance? We spammed, and we got caught, and so we’re not ranking,’ and if you take a step back – if you were to look at that from someone else’s perspective, you might consider yourself a good guy, but you spammed, so the other people might consider you a bad guy, and so the fact that you got caught meant, hey, other good guys who didn’t spam can rank higher. So from their perspective things would be going well. So you’re kind of tying those two together (Do the good guys stand a chance? and We spammed.)”

    Well, technically Matt’s right. They spammed, but it sounds like this person got screwed over by the firm they hired and paid the price for it. That might be their own fault (if that’s even really what happened), but that does make the story a little more complex than if they did their own spamming and acted like they were still “the good guys”. After all, this has even happened to Google before.

    Cutts continues, “So I think the good guys do stand a chance, and we try hard to make sure that the good guys stand a chance giving them good information, trying to make sure that they can get good information in Webmaster Tools, resources, all sorts of free information and things they can do, and lots of advice. But the good guys stand a chance if they don’t spam, right (laughs)? My advice is you might have to go through a difficult process of reconsideration requests and disavowing links, and whatever it is you need to do (getting links off the web) to clean things up.”

    “I absolutely believe good guys can stand a chance, and small sites and small businesses can stand a chance,” he says. “But I think (this is May 2013) that by the time you get to the end of the summer, I think more and more people will be saying, ‘Okay you stand a chance, but don’t start out going to the black hat forums, trying to spam, trying to do high jinks and tricks because that sort of technique is going to be less likely to work going forward.’”

    He notes that it can be harder and take longer to get good links, but that those links will likely stand the test of time.

    “Good luck,” he tells the person who submitted the question. “I hope you are able to get out of your existing predicament, and in the future please tell people before you sign up with an SEO, ask for references, do some research, ask them to explain exactly what they’re going to do. If they tell you they know me, and they have a secret in with the webspam team, you should scream and run away immediately.”

    I wonder how often that works.

    “If they will tell you what they’re doing in clear terms, and it makes sense, and it doesn’t make you feel a little uneasy then that’s a much better sign,” he says.

    Image via YouTube

  • Cutts On How Much Facebook And Twitter Signals Matter In Google Ranking

    Google put out a pretty interesting Webmaster Help video today with Matt Cutts answering a question about a topic a lot of people would like to understand better – how Facebook and Twitter affect Google rankings.

    “Facebook and Twitter pages are treated like any other pages in our web index, and so if something occurs on Twitter or occurs on Facebook, and we’re able to crawl it, then we can return that in our search results,” he says. “But as far as doing special, specific work to sort of say, ‘Oh, you have this many followers on Twitter or this many likes on Facebook,’ to the best of my knowledge, we don’t currently have any signals like that in our web search ranking algorithms.”

    “Now let me talk a little bit about why not,” he continues. “We have to crawl the web in order to find pages on those two web properties, and we’ve had at least one experience where we were blocked from crawling for about a month and a half, and so the idea of doing a lot of special engineering work to try and extract some data from web pages, when we might get blocked from being able to crawl those web pages in the future is something where the engineers would be a little bit leery about doing that.”

    “It’s also tricky because Google crawls the web, and as we crawl the web, we are sampling the web at finite periods of time. We’re crawling and fetching a particular web page,” he says. “And so if we’re fetching that particular web page, we know what it said at one point in time, but something on that page could change. Someone could change the relationship status or someone could block a follower, and so it would be a little unfortunate if we tried to extract some data from the pages that we crawled, and we later on found out that, for example, a wife had blocked an abusive husband or something like that, and just because we happened to crawl at the exact moment when those two profiles were linked, we started to return pages that we had crawled.”

    Cutts says they worry a lot about identity because they’re “sampling an imperfect web,” and identity is simply hard.

    “And so unless we were able to get some way to solve that impact, that’s where we had better information, that’s another reason why the engineers would be a little bit wary or a little bit leery of trying to extract data when that data might change, and we wouldn’t know it because we were only crawling the web.”

    Funny, because they don’t seem to be that leery about crawling Wikipedia content, which powers much of Google’s Knowledge Graph, and from time to time leads to erroneous or otherwise unhelpful information being presented as the most appropriate answer to your query. Google has, in the past, presented bad Wikipedia info for hours after it was corrected on Wikipedia itself.

    Cutts goes on to say that he’s not discouraging the use of Twitter and Facebook, and that a lot of people get “a ton of value” from both Facebook and Twitter. He also notes that both are a “fantastic avenue” for driving visitors and traffic to your site, letting people know about news and building up your personal brand. Just don’t assume that Google is able to access any signals from them.

    He also says that over a “multi-year, ten-year kind of span,” it’s clear that people are going to know more about who is writing on the web. Google will be more likely to understand identity and social connections better over that time, he says.

    Image via YouTube

  • Google: Your Various ccTLDs Will Probably Be Fine From The Same IP Address

    Ever wondered if Google would mind if you had multiple ccTLD sites hosted from a single IP address? If you’re afraid they might not take kindly to that, you’re in for some good news. It’s not really that big a deal.

    Google’s Matt Cutts may have just saved you some time and money with this one. He takes on the following submitted question in the latest Webmaster Help video:

    For one customer we have about a dozen individual websites for different countries and languages, with different TLDs under one IP number. Is this okay for Google or do you prefer one IP number per country TLD?

    “In an ideal world, it would be wonderful if you could have, for every different .co.uk, .com, .fr, .de, if you could have a different, separate IP address for each one of those, and have them each placed in the UK, or France, or Germany, or something like that,” says Cutts. “But in general, the main thing is, as long as you have different country code top level domains, we are able to distinguish between them. So it’s definitely not the end of the world if you need to put them all on one IP address. We do take the top-level domain as a very strong indicator.”

    “So if it’s something where it’s a lot of money or it’s a lot of hassle to set that sort of thing up, I wouldn’t worry about it that much,” he adds. “Instead, I’d just go ahead and say, ‘You know what? I’m gonna go ahead and have all of these domains on one IP address, and just let the top-level domain give the hint about what country it’s in.’ I think it should work pretty well either way.”

    While on the subject, you might want to listen to what Cutts had to say about location and ccTLDs earlier this year in another video.

  • The Latest From Google On Guest Blogging

    The subject of guest blogging has been coming up more and more lately in Google’s messaging to webmasters. Long story short, just don’t abuse it.

    Matt Cutts talked about it in response to a submitted question in a recent Webmaster Help video.

    He said, “It’s clear from the way that people are talking about it that there are a lot of low-quality guest blogger sites, and there’s a lot of low-quality guest blogging going on. And anytime people are automating that or abusing that or really trying to make a bunch of links without really doing the sort of hard work that really earns links on the basis of merit or because they’re editorial, then it’s safe to assume that Google will take a closer look at that.”

    “I wouldn’t recommend that you make it your only way of gathering links,” Cutts added. “I wouldn’t recommend that you send out thousands of blast emails offering to guest blog. I wouldn’t recommend that you guest blog with the same article on two different blogs. I wouldn’t recommend that you take one article and spin it lots of times. There’s definitely a lot of abuse and growing spam that we see in the guest blogging space, so regardless of the spam technique that people are using from month to month, we’re always looking at things that are starting to be more and more abused, and we’re always willing to respond to that and take the appropriate action to make sure that users get the best set of search results.”

    But you already knew that, right?

  • Google Gives Advice On Speedier Penalty Recovery

    Google has shared some advice in a new Webmaster Help video about recovering from Google penalties incurred as the result of a period of spammy link building.

    Now, as we’ve seen, sometimes this happens to a company unintentionally. A business could have hired the wrong person/people to do their SEO work, and gotten their site banished from Google, without even realizing they were doing anything wrong. Remember when Google had to penalize its own Chrome landing page because a third-party firm bent the rules on its behalf?

    Google is cautiously suggesting “radical” actions from webmasters, and sending a bit of a mixed message.

    How far would you go to get back in Google’s good graces? How important is Google to your business’ survival? Share your thoughts in the comments.

    The company’s head of webspam, Matt Cutts, took on the following question:

    How did Interflora turn their ban in 11 days? Can you explain what kind of penalty they had, how did they fix it, as some of us have spent months try[ing] to clean things up after an unclear GWT notification.

    As you may recall, Interflora, a major UK flowers site, was hit with a Google penalty early this year. Google didn’t exactly call out the company publicly, but after reports of the penalty came out, the company mysteriously wrote a blog post warning people not to engage in the buying and selling of links.

    But you don’t have to buy and sell links to get hit with a Google penalty for webspam, and Cutts’ response goes beyond that. He declines to discuss a specific company because that’s typically not Google’s style, but proceeds to try and answer the question in more general terms.

    “Google tends to look at buying and selling links that pass PageRank as a violation of our guidelines, and if we see that happening multiple times – repeated times – then the actions that we take get more and more severe, so we’re more willing to take stronger action whenever we see repeat violations,” he says.

    That’s the first thing to keep in mind, if you’re trying to recover. Don’t try to recover by breaking the rules more, because that will just make Google’s vengeance all the greater when it inevitably catches you.

    Google continues to bring the hammer down on any black hat link network it can get its hands on, by the way. Just the other day, Cutts noted that Google has taken out a few of them, following a larger trend that has been going on throughout the year.

    The second thing to keep in mind is that Google wants to know you’re taking its guidelines seriously, and that you really do want to get better – you really do want to play by the rules.

    “If a company were to be caught buying links, it would be interesting if, for example, you knew that it started in the middle of 2012, and ended in March 2013 or something like that,” Cutts continues in the video. “If a company were to go back and disavow every single link that they had gotten in 2012, that’s a pretty monumentally epic, large action. So that’s the sort of thing where a company is willing to say, ‘You know what? We might have had good links for a number of years, and then we just had really bad advice, and somebody did everything wrong for a few months – maybe up to a year, so just to be safe, let’s just disavow everything in that timeframe.’ That’s a pretty radical action, and that’s the sort of thing where if we heard back in a reconsideration request that someone had taken that kind of a strong action, then we could look, and say, ‘Ok, this is something that people are taking seriously.’”

    Now, don’t go getting carried away. Google has been pretty clear since the Disavow Links tool launched that this isn’t something that most people want to do.

    Cutts reiterates, “So it’s not something that I would typically recommend for everybody – to disavow every link that you’ve gotten for a period of years – but certainly when people start over with completely new websites they bought – we have seen a few cases where people will disavow every single link because they truly want to get a fresh start. It’s a nice looking domain, but the previous owners had just burned it to a crisp in terms of the amount of webspam that they’ve done. So typically what we see from a reconsideration request is people starting out, and just trying to prune a few links. A good reconsideration request is often using the ‘domain:’ query, and taking out large amounts of domains which have bad links.”

    “I wouldn’t necessarily recommend going and removing everything from the last year or everything from the last year and a half,” he adds. “But that sort of large-scale action, if taken, can have an impact whenever we’re assessing a domain within a reconsideration request.”

    In other words, if you’re willing to go to such great lengths and eliminate such a large number of links, Google’s going to notice.

    I don’t know that it’s going to get you out of the penalty box in eleven days (as the Interflora question mentions), but it will at least show Google that you mean business, and, in theory at least, help you get out of it.

    Much of what Cutts has to say this time around echoes things he has mentioned in the past. Earlier this year, he suggested using the Disavow Links tool like a “machete”. He noted that Google sees a lot of people trying to go through their links with a fine-toothed comb, when they should really be taking broader swipes.

    “For example, often it would help to use the ‘domain:’ operator to disavow all bad backlinks from an entire domain rather than trying to use a scalpel to pick out the individual bad links,” he said. “That’s one reason why we sometimes see it take a while to clean up those old, not-very-good links.”

    On another occasion, he discussed some common mistakes he sees people making with the Disavow Links tool. The first time people attempt a reconsideration request, they often take the scalpel (or “fine-toothed comb”) approach, rather than the machete approach.

    “You need to go a little bit deeper in terms of getting rid of the really bad links,” he said. “So, for example, if you’ve got links from some very spammy forum or something like that, rather than trying to identify the individual pages, that might be the opportunity to do a ‘domain:’. So if you’ve got a lot of links that you think are bad from a particular site, just go ahead and do ‘domain:’ and the name of that domain. Don’t maybe try to pick out the individual links because you might be missing a lot more links.”

    And remember, you need to make sure you’re using the right syntax. You need to use the “domain:” query in the following format:

    domain:example.com

    Don’t add an “http” or a “www” or anything like that. Just the domain.
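
    For context, the disavow file itself is just a plain text file uploaded through Webmaster Tools, with one URL or domain: entry per line and lines starting with # treated as comments. A hypothetical example (the names below are made up):

    # Links from these domains came from a bad SEO campaign
    domain:spammy-directory-example.com
    domain:link-network-example.net
    # Individual URLs can also be listed on their own
    http://example.org/forum/profile?id=12345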

    So, just to recap: Radical, large-scale actions could be just what you need to take to make Google seriously reconsider your site, and could get things moving more quickly than trying to single out individual links from domains. But Google wouldn’t necessarily recommend doing it.

    Oh, Google. You and your crystal clear, never-mixed messaging.

    As Max Minzer commented on YouTube (or is that Google+?), “everyone is going to do exactly that now…unfortunately.”

    Yes, this advice will no doubt lead many to unnecessarily obliterate many of the backlinks they’ve accumulated – including legitimate links – for fear of Google. Fear they won’t be able to make that recovery at all, let alone quickly. Hopefully the potential for overcompensation will be considered if Google decides to use Disavow Links as a ranking signal.

    Would you consider having Google disavow all links from a year’s time? Share your thoughts in the comments.

  • Matt Cutts Talks Content Stitching In New Video

    Google has a new Webmaster Help video out about content that takes text from other sources. Specifically, Matt Cutts responds to this question:

    Hi Matt, can a site still do well in Google if I copy only a small portion of content from different websites and create my own article by combining it all, considering I will mention the source of that content (by giving their URLs in the article)?

    “Yahoo especially used to really hate this particular technique,” says Cutts. “They called it ‘stitching’. If it was like two or three sentences from one article, and two or three sentences from another article, and two or three sentences from another article, they really considered that spam. If all you’re doing is just taking quotes from everybody else, that’s probably not a lot of added value. So I would really ask yourself: are you doing this automatically? Why are you doing this? Why? People don’t just like to watch a clip show on TV. They like to see original content.”

    I don’t know. SportsCenter is pretty popular, and I don’t think it’s entirely for all the glowing commentary. It’s also interesting that he’s talking about this from Yahoo’s perspective.

    “They don’t just want to see an excerpt and one line, and then an excerpt and one line, and that sort of thing,” Cutts continues. “Now it is possible to pull together a lot of different sources, and generate something really nice, but you’re usually synthesizing. For example, Wikipedia will have stuff that’s notable about a particular topic, and they’ll have their sources noted, and they cite all of their sources there, and they synthesize a little bit, you know. It’s not like they’re just copying the text, but they’re sort of summarizing or presenting as neutral of a case as they can. That’s something that a lot of people really enjoy, and if that’s the sort of thing that you’re talking about, that would probably be fine, but if you’re just wholesale copying sections from individual articles, that’s probably going to be a higher risk area, and I might encourage you to avoid that if you can.”

    If you’re creating good content that serves a valid purpose for your users, my guess is that you’ll be fine, but you know Google hates anything automated when it comes to content.

  • Cutts Talks Disavow Links Tool And Negative SEO

    Google has put out a new Webmaster Help video discussing the Disavow Links tool, and whether or not it’s a good idea to use it even when you don’t have a manual action against your site.

    Google’s Matt Cutts takes on the following question:

    Should webmasters use the disavow tool, even if it is believed that no penalty has been applied? For example, if we believe ‘negative SEO’ has been attempted, or spammy sites we have contacted have not removed links.

    As Cutts notes, the main purpose of the tool is for when you’ve done some “bad SEO” yourself, or someone has on your behalf.

    “At the same time, if you’re at all worried about someone trying to do negative SEO or it looks like there’s some weird bot that’s building up a bunch of links to your site, and you have no idea where it came from, that’s a perfect time to use Disavow as well.”

    “I wouldn’t worry about going ahead and disavowing links even if you don’t have a message in your webmaster console. So if you have done the work to keep an active eye on your backlinks, and you see something strange going on, you don’t have to wait around. Feel free to just preemptively say, ‘This is a weird domain. I have nothing to do with it. I don’t know what this particular bot is doing in terms of making links.’ Just feel free to go ahead and do disavows, even on a domain level.”

    As Cutts has said in the past, feel free to use the tool “like a machete”.

  • Cutts: Use Schema Video Markup For Pages With Embedded YouTube Videos

    There is a lot of webmaster interest these days in the impact schema markup has on content in search results.

    Today’s Webmaster Help video from Google addresses video markup. Matt Cutts takes on the following submitted question:

    Rich snippets are automatically added to SERPs for video results from YouTube. Is it recommended to add schema video markup onsite in order to get your page w/embedded video to rank in SERPs in addition to the YouTube result or is this redundant?

    Cutts says he checked with a webmaster trends analyst, and they said, “Yes, please get them to add the markup.”

    He says, “In general, you know, the more markup there is (schema, video or whatever), the easier it is for search engines to be able to interpret what really matters on a page. The one thing that I would also add is, try to make sure that you let us crawl your JavaScript and your CSS so that we can figure out the page and ideally crawl the video file itself, so that we can get all the context involved. That way if we can actually see what’s going on on the video play page, we’ll have a little bit better of an idea of what’s going on with your site. So yes, I would definitely use the schema video markup.”

    There you have it. The answers Cutts gives in these videos aren’t always so straightforward, but this pretty much gives you a direct answer, and one which can no doubt be applied to other types of content beyond video. Use as much markup as you can, so Google (and other search engines) can understand your site better.
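
    For reference, a page with an embedded YouTube video can carry schema.org VideoObject markup roughly like this (a hedged sketch with made-up values, not something taken from the video itself):

    <div itemscope itemtype="http://schema.org/VideoObject">
      <h2 itemprop="name">Title of the embedded video</h2>
      <meta itemprop="uploadDate" content="2014-05-01">
      <meta itemprop="thumbnailUrl" content="http://example.com/video-thumb.jpg">
      <meta itemprop="embedUrl" content="http://www.youtube.com/embed/VIDEO_ID">
      <span itemprop="description">A short description of what the video covers.</span>
    </div>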

  • Matt Cutts On Creating More Content For Better Google Rankings

    You may think that having more webpages increases your chances of getting better Google rankings. Well, you might be right. Kind of.

    This is the topic of the latest Google Webmaster Help video from Matt Cutts.

    “I wouldn’t assume that just because you have a large number of indexed pages that you’ll automatically get a higher ranking,” Cutts explains. “That’s not the case. It is the case that if you have more pages that have different keywords on them, then you have the opportunity where they might be able to rank for the individual queries that a user has. But just having more pages doesn’t automatically mean that you’ll be in good shape or that you’ll get some sort of ranking boost.”

    “Now, typically, if a site does have more pages, it might mean that it has more links pointing to it, which means it has higher PageRank,” he continues. “If that’s the case, we might be willing to crawl a little bit deeper into the website, and if it has higher PageRank, then we might think that’s a little bit of a better match for users’ queries. So those are some of the factors involved. Just having a big website with a lot of pages by itself doesn’t automatically confer a boost, but if you have a lot of links or a lot of PageRank, which is leading to deeper crawling within your site, then that might be the sort of indicator that perhaps your site would rank just a little bit higher.”

    Cutts again reiterates that just having more pages won’t get you better rankings, but will create more opportunities for links, which can lead to rankings.

    So, the takeaway here is that creating more content is probably a good thing. Just make it compelling stuff that people might want to link to. Unfortunately, that kind of stuff typically takes more time and energy.

  • How Google Evaluates The Merit Of A Guest Blog Post

    It’s Matt Cutts video time again. This time, he answers the question: “How can I guest blog without it looking like I pay for links?”

    “Let’s talk about, for example, whenever we get a spam report, and we dig into it in the manual webspam team, usually there’s a pretty clear distinction between an occasional guest blog versus someone who is doing large scale pay-for-links kinds of stuff,” he says, “So what are the different criteria on that spectrum? So, you know, if you’re paying for links, it’s more likely that it’s off topic or an irrelevant blog post that doesn’t really match the subject of the blog itself, it’s more likely you’ll see the keyword-rich anchor text, you know, that sort of thing. Whereas a guest blog, it’s more likely to be someone that’s expert, you know. There will usually be something – a paragraph there that talks about who this person is, why you invited them to be on your blog. You know, hopefully the guest blogger isn’t dropping keywords in their anchors nearly as much as you know, these other sorts of methods of generating links.”

    “So it is interesting,” he continues. “In all of these cases, you can see a spectrum of quality. You can have paid links with, you know, buy cheap viagra, all that sort of stuff. You can have article marketing, where somebody doesn’t even have a relationship with the blog, and they just write an article – 500 words or whatever – and they embed their keyword-rich anchor text in their bio or something like that, and then you’ve got guest blogging, which, you know, can be low quality, and frankly, I think there’s been a growth of low quality guest blogging recently. Or it can be higher quality stuff where someone really is an expert, and you really do want their opinion on something that’s especially interesting or relevant to your blog’s audience.”

    These are the kinds of criteria Google looks at when trying to determine if something is spam, Cutts says. He also cautions against spinning content for contribution on a bunch of blogs. That’s not the best way to build links, he says.

    Go here for more past comments from Google related to guest blog posts.

  • Matt Cutts Talks Having Eggs In Different Baskets

    In today’s Webmaster Help video, Google’s Matt Cutts has to explain that Google always adjusts its search results, and always has, and why you shouldn’t put all your eggs in one basket.

    It’s times like this when one realizes that no question is too bad for one of these videos to address, so if you’ve always wanted to ask something, but were afraid to, let this give you the confidence you need to proceed.

    You can tell by the way he closes his laptop he’s answered questions like this a few times before.