WebProNews

Tag: SEO

  • Now Webmasters Have Google App Indexing To Think About

    Google announced on Thursday that it is testing app indexing with Android apps. This, the company says, will create a seamless user experience across mobile apps and websites, when it comes to search results pages. With more and more searches coming from mobile devices, the addition of app indexing is long overdue.

    Do you expect Google’s new app indexing to change your search strategy? Do you intend to focus more on mobile apps? Let us know in the comments.

    Googlebot will now crawl and index content within Android apps, meaning that Google searches from mobile devices can point users directly to relevant content in your app, as opposed to your website, when it makes sense to do so.

    “Searchers on smartphones experience many speed bumps that can slow them down,” writes product manager Lawrence Chang in a blog post. “For example, any time they need to change context from a web page to an app, or vice versa, users are likely to encounter redirects, pop-up dialogs, and extra swipes and taps. Wouldn’t it be cool if you could give your users the choice of viewing your content either on the website or via your app, both straight from Google’s search results?”

    App Indexing

    “If both the webpage and the app contents are successfully indexed, Google will then try to show deep links to your app straight in our search results when we think they’re relevant for the user’s query and if the user has the app installed,” Chang explains. “When users tap on these deep links, your app will launch and take them directly to the content they need.”

    Android users in the U.S. (with the Google Search App 2.8+ and Android 4.1+) will start seeing deep links from apps in their search results in the coming weeks. App indexing is still in the testing phase, however, and Google is starting out with apps from Allthecooks, AllTrails, Beautylish, Etsy, Expedia, Flixster, Healthtap, IMDB, Moviefone, Newegg, OpenTable, and Trulia.

    Still, you can get the process started to enable Google to index content from your apps. Google has a form you can fill out if you want to get in on the testing.

    Google says app indexing will not impact ranking. In a Q&A, the company says, “App indexing does not impact on your website’s ranking in the search results page. It does affect how a search result of your website is displayed, namely by adding a button to open the content in your app if the user has the app installed.”

    While simply pointing Google to app content rather than web content may not directly affect ranking, apps are sometimes more user-friendly than web content, particularly on mobile devices, and it’s hard to see Google not taking that into account when ranking results.

    In other words, if you are able to provide a better user experience from your mobile app than you are from your webpage, why wouldn’t Google rank it better? It’s something to think about, and could lead to more businesses placing more focus on mobile apps. The industry will no doubt be watching how the results appear as Google shows more of them.

    Of course, the user has to have the app installed to access its content, which is obviously a significant barrier. Some businesses will likely have more success if they have a presence in other popular apps. It will be especially interesting for ecommerce merchants, for example, to see how content from apps like Etsy does in search results.

    “Just like crawling a website, Google uses many signals to determine the frequency at which your app is crawled,” Google says in the Q&A. “As a rule of thumb, it will be a similar frequency at which your website is crawled.”

    Good to know.

    Google also notes that like web-only sitemaps, you can have more than one sitemap for your app content.
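
    For reference, the sitemap side of this uses an alternate link annotation pointing at the app’s deep link. Here’s a minimal sketch, assuming a hypothetical package name (com.example.recipes) and URL; the exact format is covered in Google’s app indexing documentation:

      <?xml version="1.0" encoding="UTF-8"?>
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
              xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <url>
          <!-- The web version of the page -->
          <loc>http://example.com/recipes/chicken-soup</loc>
          <!-- The matching Android deep link:
               android-app://{package_name}/{scheme}/{host_path} -->
          <xhtml:link rel="alternate"
                      href="android-app://com.example.recipes/http/example.com/recipes/chicken-soup" />
        </url>
      </urlset>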

    The company is working on surfacing relevant information in Webmaster Tools to let webmasters know whether their app indexing is actually working.

    To get started, you’ll need to annotate the pages on your site that can be opened in your app, specifying how that content should be opened, and add intent filters to your app for deep linking. You can check out the documentation here.
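
    As a rough sketch of the page-level annotation, assuming the same hypothetical package name and URL as in the sitemap example above, it’s just a link element in the head of the web page pointing at the corresponding deep link:

      <html>
      <head>
        <!-- Tells Google this page's content can also be opened in the Android app -->
        <link rel="alternate"
              href="android-app://com.example.recipes/http/example.com/recipes/chicken-soup" />
      </head>
      <body>
        <!-- The regular web content for the same recipe -->
      </body>
      </html>

    The intent filters mentioned above are the app-side counterpart that lets Android actually open those URLs.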

    It’s going to be interesting to see if this has any substantial impact on Android app development in general, and if Google starts indexing content in apps on other platforms.

    Do you think Google’s new app indexing is a game changer? Let us know in the comments.

  • Google App Indexing Will List Deep Links From Android Apps In Search Results

    Google announced today that it is testing app indexing, which it says is aimed at creating a “seamless” user experience across sites and mobile apps from search results pages.

    Googlebot will now index content in Android apps, and webmasters will be able to let Google know which app they’d like Google to index through their existing sitemaps file and through Webmaster Tools.

    “Searchers on smartphones experience many speed bumps that can slow them down,” writes product manager Lawrence Chang in a blog post. “For example, any time they need to change context from a web page to an app, or vice versa, users are likely to encounter redirects, pop-up dialogs, and extra swipes and taps. Wouldn’t it be cool if you could give your users the choice of viewing your content either on the website or via your app, both straight from Google’s search results?”

    “If both the webpage and the app contents are successfully indexed, Google will then try to show deep links to your app straight in our search results when we think they’re relevant for the user’s query and if the user has the app installed,” Chang explains. “When users tap on these deep links, your app will launch and take them directly to the content they need.”

    Like so:

    App Indexing

    App indexing is only being tested with a limited number of developers for now. These reportedly include: Allthecooks, AllTrails, Beautylish, Etsy, Expedia, Flixster, Healthtap, IMDB, Moviefone, Newegg, OpenTable, and Trulia.

    Android users in the U.S. will start seeing deep links from apps in their search results in a few weeks, Google says.

    Documentation for getting it set up on your own app is available here.
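
    On the app side, deep linking comes down to an intent filter in AndroidManifest.xml that tells Android which URLs your activity can handle. A minimal sketch, with a hypothetical package, activity and host:

      <!-- Inside AndroidManifest.xml -->
      <activity android:name="com.example.recipes.RecipeActivity"
                android:label="Recipe">
        <intent-filter>
          <action android:name="android.intent.action.VIEW" />
          <category android:name="android.intent.category.DEFAULT" />
          <category android:name="android.intent.category.BROWSABLE" />
          <!-- Matches links like http://example.com/recipes/... -->
          <data android:scheme="http"
                android:host="example.com"
                android:pathPrefix="/recipes" />
        </intent-filter>
      </activity>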

  • Google’s ‘Misinformation Graph’ Strikes Again

    Users have encountered another blunder from the Google Knowledge Graph, with Google showing some quite questionable content and presenting it as “knowledge” on a very high-traffic search term. This is only the latest in a series of misfires from the Knowledge Graph, but probably the highest-profile example yet, given the search term.

    Do you consider Google’s results to be reliable? Let us know in the comments.

    The term is “st. louis cardinals”. As you may know, the team is currently in the World Series, so it stands to reason there are a lot of searches happening for that particular term. It’s currently number five for baseball teams on Google Trends:

    Search for “st. louis cardinals” on Google right now, and you’ll probably see a Knowledge Graph result that looks something like this:

    Cardinals knowledge graph

    Okay, looks legit. Last night, however, things looked a little different, as Ben Cook pointed out on Twitter (via RustyBrick).

    Yep, it really said that. That’s not a photoshop job. As David Goldstick pointed out, a Wikipedia revision had been made earlier, but Google hadn’t updated its cache. You can see the revision here:

    Cardinals Wiki

    We’ve reached out to Google for comment on update timing, and will update if we hear back.

    Update: We just got a response from Google’s Jason Freidenfelds, who tells us, “We crawl sources at different rates; for fast-changing info it can be within hours. But in this case it was a technical issue on our end that let outdated information through. We’ve fixed the issue.”

    It’s unclear exactly how long this text appeared in Google’s search results, but it was at least for a few hours, according to Rusty. And the Cardinals did play a World Series game, so quite a few people probably saw it. Some even accused Google of being a Red Sox fan:

    We’ve talked about the reliability and credibility of Google’s Knowledge Graph results a few times in the past, mainly because things keep happening. In fact, it hasn’t even been a month since the last mistake we saw, when Google was showing an image of the singer/actress Brandy for brandy the drink.

    Brandy

    After a little media coverage, they appear to have corrected it, but it took them a while, even after said coverage. They couldn’t blame Wikipedia on that one because the Wikipedia page for the drink showed a drink.

    There have been other cases where Google has shown erroneous info in the Knowledge Graph. A while back, for example, it got a football player’s marital status wrong.

    As I’ve said before, the errors may be few and far between, but how can users know for sure whether or not they can trust the information Google is providing as “knowledge”? Typically, users aren’t going to question the information they see here unless it’s obviously wrong.

    In the case of the St. Louis Cardinals, it was obviously a prank, but people looking to spread misinformation can be a lot more clever than that. There’s no telling how much factually incorrect info Google is highlighting to users at any given time. Even if Google is able to quickly correct it, people may still have seen it in the meantime. As we see with the Cardinals example, this can even happen on major search queries.

    In some cases, we’ve even seen Google promoting brands on generic queries.

    Meanwhile, Google continues to expand the Knowledge Graph to more types of queries, and to provide more types of information, potentially opening the door to more errors.

    A side effect of Google’s Knowledge Graph is that people have less of a reason to click over to other websites. When Google is presenting the “answer” to their queries right in the search results, why bother to look further? You just assume it’s the correct answer. Not that there isn’t going to be misinformation on third-party sites, but at least going in, users can decide for themselves how much they want to trust a particular source. I think most probably trust Google enough to assume they’re displaying factual info on their search results page.

    Google’s voice search also draws from the Knowledge Graph to provide users with answers, and this kind of searching is only gaining momentum as smartphone use grows. Users count on Google to give them factual information when it doesn’t point them elsewhere. Should they second-guess the info they’re getting every time they get an “answer”?

    Part of the issue is Wikipedia’s own credibility. Is sourcing the majority of the Knowledge Graph to Wikipedia a good idea in the first place?

    This comes at an interesting time for Wikipedia itself. Last week, executive director Sue Gardner announced that Wikipedia had already shut down hundreds of accounts for paid edits. People have been manipulating Wikipedia for their own monetary gain, and apparently some of the higher-ups had allowed it to happen, which is why it was able to go on at all. Gardner expressed “shock and dismay” over the whole thing, and the investigation is ongoing.

    Gardner, by the way, announced earlier this year that she was stepping down from her position, saying she was “uncomfortable” with where the Internet is heading. While Wikipedia, in general, has been an invaluable source of information for years, these things make you question its reliability, and by default, the reliability of Google’s Knowledge Graph, which leans on Wikipedia so heavily for its information. This is, by the way, where the Internet is headed – at least where Internet search is headed.

    “This is a critical first step towards building the next generation of search,” Google’s Amit Singhal said in introducing the Knowledge Graph.

    And in case you’re thinking about Bing, it has a practically identical feature (though I’ve not seen any reports about the Cardinals blunder related to Bing).

    It’s very possible – perhaps likely – that the majority of the answers and information that Google’s Knowledge Graph feeds you is completely accurate, but if you’re ever searching for anything important (remember, Knowledge Graph includes nutrition and medical knowledge now), you may do well to remember that St. Louis Cardinals example, and continue your research. Verify the important facts. If Google can get it so wrong on such a hot search query, it can probably get it wrong on more obscure stuff.

    Is this the direction search should be going in? Do you trust the Knowledge Graph? Share your thoughts in the comments.

  • Matt Cutts On Creating More Content For Better Google Rankings

    You may think that having more webpages increases your chances of getting better Google rankings. Well, you might be right. Kind of.

    This is the topic of the latest Google Webmaster Help video from Matt Cutts.

    “I wouldn’t assume that just because you have a large number of indexed pages that you’ll automatically get a higher ranking,” Cutts explains. “That’s not the case. It is the case that if you have more pages that have different keywords on them, then you have the opportunity where they might be able to rank for the individual queries that a user has. But just having more pages doesn’t automatically mean that you’ll be in good shape or that you’ll get some sort of ranking boost.”

    “Now, typically, if a site does have more pages, it might mean that it has more links pointing to it, which means it has higher PageRank,” he continues. “If that’s the case, we might be willing to crawl a little bit deeper into the website, and if it has higher PageRank, then we might think that’s a little bit of a better match for users’ queries. So those are some of the factors involved. Just having a big website with a lot of pages by itself doesn’t automatically confer a boost, but if you have a lot of links or a lot of PageRank, which is leading to deeper crawling within your site, then that might be the sort of indicator that perhaps your site would rank just a little bit higher.”

    Cutts reiterates that just having more pages won’t get you better rankings by itself, but it will create more opportunities for links, which can lead to better rankings.

    So, the takeaway here is that creating more content is probably a good thing, as long as it’s compelling stuff that people might want to link to. Unfortunately, that kind of content typically takes more time and energy.

  • Google Is Apparently Reducing Authorship In Results

    Matt Cutts spoke at Pubcon in Las Vegas, discussing numerous SEO topics as usual. Bruce Clay has a pretty good basic recap here.

    You can see 25 minutes of his speech here:

    There doesn’t appear to be a whole lot of big news to come out of the keynote. He discussed a lot of the things Google has been doing that everybody already knows about. He did say that Google is going to be working on combating hacking and child porn in the coming months, and noted that the reason Toolbar PageRank hasn’t been updated is that the export feature that sends the data to the toolbar broke, and they didn’t bother to fix it. It’s unclear whether they will bother in the future. My guess is no.

    Trends for webmasters to think about going forward, according to Cutts, include making sure your site looks good on mobile devices, annotating your forms for autocomplete, and using rich snippets (on reputable sites). Google is also getting better at JavaScript. Meanwhile, the page layout algorithm will start having a greater impact on Arabic and Russian sites.

    One interesting nugget to come out of Cutts’ speech is that Google is apparently going to be reducing the number of authorship results it shows by 15%, saying that this will improve quality.

    Google reportedly still sees authorship as a key signal, they just want to “tighten” it to make sure it’s really relevant and useful, from what I gather.

  • Did Google Make The Right Call With This Algorithm Change?

    Google recently launched an update to its algorithm to take action against sites that post mugshots of people, and charge money to have them removed. While the move could go a long way in keeping people’s online reputations from suffering irreparable damages, at least one of the sites targeted thinks the update is actually putting people in danger by hiding criminal behavior.

    Do you think Google made the right move in implementing this algorithm change? Let us know what you think in the comments.

    The news came as The New York Times posted an in-depth report on the practice and Google’s high rankings of results from such sites. Google, however, said that the update has been in the works for most of the year.

    The basic gist of the article was that there are a bunch of sites out there that make money by gathering mugshots (which are in the public domain), and then get ranked in the search results for name searches for the people who appear in the shots. Before this update, these sites were ranking very well in Google, and causing major reputation-damaging problems for the people. And we’re not talking just hardened criminals, murderers and sex offenders here. We’re talking about people who were arrested, but never convicted, people that made minor mistakes, and have repaid their “debt to society,” and others who simply don’t deserve to have a mugshot be the first thing that comes up in a Google search for their name when they’re trying to get a job.

    Really, this can hurt not only the people looking for jobs, but also businesses that may be missing out on highly qualified talent due to these results tarnishing the image of the prospect. And what if some of these individuals are looking to start their own businesses?

    The sites charge money to have the damaging content removed. According to the report, this can sometimes be as much as $400, and even once a person pays one site, the same content is likely to appear on similar sites. That very fact might be one of the reasons Google decided to take action, because historically, Google hasn’t much cared about removing reputation-damaging content unless legally required to do so.

    This week, Mugshots.com, one of the sites named in the NYT piece, has put out a press release attacking Google and its algorithm change with a big ol’ “no free speech” symbol at the top:

    Mugshots.com on free speech

    The release discusses an article the site posted on its blog, and says:

    While individuals arrested for minor offenses or never convicted enjoy the attention of sympathetic news media, what gets lost in the emotional mix is previously a Google search also returned results showing the criminal history of individuals arrested for extremely serious crimes as well as convictions. Except in extremely limited situations discussed in our article, that’s not the case anymore. Thanks to Google’s algorithm change, there is now an enormous public safety blind spot that puts every person in the country at potential risk who performs a Google search on someone with a criminal history—that number is in the millions. Google’s algorithm change does not discriminate; it protects and shields the sympathetic and the truly wicked alike at the expense of public safety and the ability to make meaningful informed decisions by millions of Americans.

    A person’s arrest, even for a minor offense and/or for which the person was never convicted, is always relevant information to the individual performing the search. That’s one important piece of information people naturally want to take into consideration when making an informed decision. However, Google has made the determination for all Americans that you shouldn’t have easy access to public information of an indisputable fact and undeniably relevant by intentionally concealing it.

    Google is the go-to place for information. If information isn’t there, it simply doesn’t exist for most Internet users. It’s not an overstatement to say that with its algorithm change Google has effectively hidden from public view the criminal history of most individuals arrested and convicted in this country. While arrest records are available at government websites, they almost never appear during a search of a person’s name even when the person has been arrested and convicted. Prior to the algorithm change, a simple search of just about anyone with a criminal history appeared prominently in search results with a link to a website that publishes mugshots. Google cannot, with a clear conscience, now deprive millions of Americans access to vital public records with a shrug and note that they’re available elsewhere.

    For example, news articles have been written about a particular Illinois attorney, but if he were like most arrestees, there wouldn’t be any news coverage of his arrest for stealing $1.2 million from seven clients. For potential clients performing Google searches on him now, the most relevant search result would be his BBB rating of ‘A+’, nothing about his arrest as was the case before the algorithm modification. Similarly, there is the case of an Ohio babysitter arrested after videotape surfaced of her raping an infant in her care. If news stories weren’t written about her as is the case with most arrests, anyone performing a Google search on her wouldn’t be alerted to the disgusting allegations against her since no government website with her arrest appears in the early pages of a Google search. Even though both individuals have “only” been arrested, isn’t that information you’d consider relevant in deciding whether to allow them into your life? Google doesn’t think so. To learn more about them and the disturbing unintended consequences of Google’s decision, please read our full article.

    You can read it here if you like, but you can probably get the basic premise from the above text.

    While there might be some legitimate points made within Mugshots.com’s writings about unintended consequences (we’ve certainly seen those with other Google updates), they really don’t address one of the main points of the media coverage of Google’s update, which is that of sites charging people (even those without criminal charges) to have their mugshots removed – the apparent business model of such sites.

    It’s also unclear how any of this is a free speech issue.

    Either way, the story does highlight the control Google has over the flow of information on the Internet. It’s true that this info is still out there, but Google has such a huge share of the search market, it requires people to step outside of their comfortable habits to find information other ways. But Google is not keeping these sites from continuing their practices. Google doesn’t have quite that much power. And doesn’t Google have the same “free speech” right to dictate the kinds of results it wants to show people on its own site?

    Google hasn’t really talked about this update a whole lot other than to acknowledge its existence (at least from what I’ve seen). They did give the NYT this statement:

    “Our team has been working for the past few months on an improvement to our algorithms to address this overall issue in a consistent way. We hope to have it out in the coming weeks.”

    If Google opened up more about it, I wouldn’t be surprised to learn that this is more about search quality than Google taking sides on the reputations of third-parties. That’s not really Google’s style. Not when it comes to lawful content.

    That bit about unintended consequences is likely to be in the back of some webmasters’ minds too. Google’s updates are rarely (if ever) perfect, and sometimes there are unintended casualties. In this case, those casualties may never know if they were impacted by this update, as its timing was very close to the latest Penguin update. Let’s hope there aren’t people chasing an impossible Penguin recovery as a result.

    What do you think of the “Mugshot” update? Does Mugshots.com have a point, or is Google doing the right thing? Share your thoughts in the comments.

    Images: PRNewsWire and JustMugShots

  • How Google Evaluates The Merit Of A Guest Blog Post

    It’s Matt Cutts video time again. This time, he answers the question: “How can I guest blog without it looking like I pay for links?”

    “Let’s talk about, for example, whenever we get a spam report, and we dig into it in the manual webspam team, usually there’s a pretty clear distinction between an occasional guest blog versus someone who is doing large scale pay-for-links kinds of stuff,” he says, “So what are the different criteria on that spectrum? So, you know, if you’re paying for links, it’s more likely that it’s off topic or an irrelevant blog post that doesn’t really match the subject of the blog itself, it’s more likely you’ll see the keyword-rich anchor text, you know, that sort of thing. Whereas a guest blog, it’s more likely to be someone that’s expert, you know. There will usually be something – a paragraph there that talks about who this person is, why you invited them to be on your blog. You know, hopefully the guest blogger isn’t dropping keywords in their anchors nearly as much as you know, these other sorts of methods of generating links.”

    “So it is interesting,” he continues. “In all of these cases, you can see a spectrum of quality. You can have paid links with, you know, buy cheap viagra, all that sort of stuff. You can have article marketing, where somebody doesn’t even have a relationship with the blog, and they just write an article – 500 words or whatever – and they embed their keyword-rich anchor text in their bio or something like that, and then you’ve got guest blogging, which, you know, can be low quality, and frankly, I think there’s been a growth of low quality guest blogging recently. Or it can be higher quality stuff where someone really is an expert, and you really do want their opinion on something that’s especially interesting or relevant to your blog’s audience.”

    These are the kinds of criteria Google looks at when trying to determine if something is spam, Cutts says. He also cautions against spinning content for contribution on a bunch of blogs. That’s not the best way to build links, he says.

    Go here for more past comments from Google related to guest blog posts.

  • Matt Cutts Talks Having Eggs In Different Baskets

    In today’s Webmaster Help video, Google’s Matt Cutts has to explain that Google always adjusts its search results, and always has, and why you shouldn’t put all your eggs in one basket.

    It’s times like this when one realizes that no question is too bad for one of these videos to address, so if you’ve always wanted to ask something, but were afraid to, let this give you the confidence you need to proceed.

    You can tell by the way he closes his laptop he’s answered questions like this a few times before.

  • Google Hummingbird And Structured Data: Is There A Connection?

    There has been a lot written about Google’s Hummingbird algorithm since it was announced. Unfortunately, very little of it comes in the form of reliable facts that really give anybody a solid understanding of what they can do to help their sites. Mostly, it’s just a lot of speculation and theory. But to be fair, isn’t that how most of this stuff usually goes?

    I’m not going to pretend to have the answers either, but one particular topic that has come up repeatedly throughout discussions about Hummingbird is that of structured data. To be clear, we’re not going to sit here and tell you that using it is going to have a direct impact on your rankings or even that it’s directly related to Hummingbird. Rather, we’ll simply examine some things Google has said, and some thoughts from others in the industry about whether or not it makes a difference. You know, for discussion’s sake.

    Do you think implementing structured data on your site will have more of a benefit under Hummingbird? Let us know what you think.

    Paul Bruemmer wrote a piece at Search Engine Land called Future SEO: Understanding Entity Search, which starts off talking about Hummingbird.

    In that he says, “This is Google’s solution for evolving from text links to answers. Such a system will display more precise results faster, as it’s based on semantic technology focused on user intent rather than on search terms. To review Google’s progress in this direction: first came the Knowledge Graph, then Voice Search and Google Now — all providing answers, and sometimes even anticipating the questions. To serve these answers, Google relies on entities rather than keywords.”

    We reached out to him for some further thoughts on the subject, particularly with regards to structured data. He told us, “Webmasters have good reason to implement structured data, Hummingbird is a continued example of Google using semantic technology to ‘understand’ vs. ‘index.’”

    “Google has been rolling-out incremental changes related to structured data over the past five years,” he added. “It is a clear path for providing better search results for all the engines, Google, Yahoo! and Bing. Structured data is valued by search engines and it is being used (aggregated) to enhance the SERPs, providing a better user experience producing higher click-through-rates (CTR). Currently there’s not enough scientific raw data to correlate structured data with improved rankings. However, observation and experience from qualified SEO practitioners suggest the influence of structured data is at the top of the tactical To Do list.”

    Google Webmaster Tools shows the following types on the Structured Data page (as listed and discussed here): schema.org, microformats, microdata, RDFa, and data you’ve tagged in the Data Highlighter.

    “Schema.org (which is microdata) and RDFa are both syntax (new meta data) adopted and approved by Google, Yahoo! and Bing,” he continued. “Google’s ‘Data Highlighter’ is for Google’s index only; webmasters using the data highlighter are creating semantic markup which is only visible to Google. In my opinion, there is a deeper and longer-term value when using Schema.org (microdata) consumable and visible to all search engines e.g., a Web of structured data. GWT Structured Data page currently provides a limited view and will hopefully be enhanced to include much more information moving forward. There is a deeper semantic strategy involved, ‘How To’ training and recommendations for webmasters is being developed as the semantic and academic communities converge with Internet communities.”

    Bill Slawski from SEO By The Sea wonders why people think Hummingbird has anything to do with Schema, but also notes that “Schema is putting information into a framework that makes it easier for Google to extract information.”

    Slawski has been critical of all the “gibberish” and “rubbish” that is being written about Hummingbird, saying that “95% of the articles about it aren’t very good.”

    He recently put out this story on the “Hummingbird Patent”.

    One story he recommends, however, is one from Ammon Johns. It is indeed a good read. Still, Johns doesn’t exactly downplay the use of structured data.

    Johns writes in a comment on the article, “All that schema markup you’re being told to add? This is directly applicable to machine learning – helping the machine by ensuring a good input data-set has consistency for better processing and learning. Eventually, even a search for your own name will turn up a Google Knowledge Graph result and only link to your own site bio as an afterthought, if at all.”

    In a Search Engine Land article, Eric Ward points to words from Google’s Amit Singhal when Hummingbird was announced:

    “The change needed to be done, Singhal said, because people have become so reliant on Google that they now routinely enter lengthy questions into the search box instead of just a few words related to specific topics.”

    Ward says, “So there’s Clue #1. Searchers enter questions. Google wants to give them answers — fast, accurately, and preferably without having to leave Google.com.”

    There could be something to that “without having to leave Google.com” part. Google has clearly been looking to keep users on Google.com more and more over time, providing answers from the result page when possible.

    “Good content with strong backlinks may no longer be enough,” Ward says. “This may be painful to hear, but logic dictates that if Google is anticipating longer search phrases and answering questions directly, then that means even if you have a great answer to that same question and your page containing that answer ranks at position 4, the end user may never see it or click on it because Google has answered the question for them.”

    He gives the example of looking for Peyton Manning stats on Google. Even if you have the best NFL stats site on the web, he notes, Google is going to give you something that looks like this:

    Peyton Manning stats

    Google wants to give users answers without having to leave Google. If your site has that answer, in some cases, structured data may help Google understand that you have that answer.

    Of course, this means that if users are getting the answers without having to leave Google, they’re not going to have to bother going to other sites (like yours). We talked about this in Is It Worth It To Your Site To Help Google Build Its ‘Knowledge’?

    “If Google understands the content on your pages, we can create rich snippets—detailed information intended to help users with specific queries,” Google says. “For example, the snippet for a restaurant might show the average review and price range; the snippet for a recipe page might show the total preparation time, a photo, and the recipe’s review rating; and the snippet for a music album could list songs along with a link to play each song. These rich snippets help users recognize when your site is relevant to their search, and may result in more clicks to your pages.”

    Google supports rich snippets for reviews, people, products, businesses and organizations, recipes, events and music, and recognizes markup for video content. Does your site fit into any of these areas?
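
    To give a rough idea of what that markup looks like, here’s a minimal recipe page sketch using schema.org microdata (the recipe details are made up; Google’s rich snippets documentation lists the full set of supported properties):

      <div itemscope itemtype="http://schema.org/Recipe">
        <h1 itemprop="name">Grandma's Chicken Soup</h1>
        <img itemprop="image" src="chicken-soup.jpg" alt="Chicken soup" />
        <!-- ISO 8601 duration: 45 minutes total -->
        <time itemprop="totalTime" datetime="PT45M">45 minutes</time>
        <div itemprop="aggregateRating" itemscope
             itemtype="http://schema.org/AggregateRating">
          Rated <span itemprop="ratingValue">4.5</span>/5
          by <span itemprop="reviewCount">87</span> reviewers
        </div>
      </div>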

    This isn’t new, but doesn’t it just make sense to do everything you can to help Google more easily understand the content on your site in the era of Hummingbird, when Google is trying to give people the answers to their questions?

    Keep in mind that Hummingbird is an overhaul of Google’s algorithm. It’s not a signal. It’s the algorithm. Why not take advantage of an existing signal, which is part of Google’s larger algorithm that wants to answer people’s questions? Just saying.

    What do you think? Is structured data worth doing? Share your thoughts in the comments.

    Image: Thinkstock

  • Matt Cutts On Geo-location: Just Treat Googlebot Like Every Other User

    In the latest Google Webmaster Help video, Matt Cutts responds to a question about geo-location:

    Using geo-detection techniques is against Google, I am offering the useful information (price, USPs) to the users based on the geo-location. Will Google consider this as spam? i.e. showing X content to search engines and Y content to users.

    “Geo-location is not spam,” he says. “As long as you’re showing, ‘Oh, someone’s coming from a French IP address, let’s redirect them to the French version of my page or the French domain for my business,’ that’s totally fine. Someone comes in from a German IP address, I’ll redirect them over to the German version of my page – that’s totally fine. The thing that I would do is make sure that you don’t treat search engines any differently than a regular user. So if Googlebot comes in, you check the IP address, imagine that we’re coming from the United States, just redirect Googlebot to the United States version of your page or the .com – whatever it is that you would serve to regular United States users. So geo-location is not spam. Google does it.”

    “Whenever users come in, we send them to what we think is the most appropriate page based on a lot of different signals, but usually the IP address of the actual user,” he adds.

    The last part of the question (the X content to search engines and Y content to users part), he says, is cloaking, and is something he would be “very careful about.”

    The point is, just treat Googlebot like every other user, and you should be fine.

  • Google Penguin Update 2.1 Is Bigger Than Your Average Penguin Refresh

    Google’s Matt Cutts announced late on Friday that Penguin 2.1 was launching, affecting roughly 1% of searches “to a noticeable degree.”

    Did you notice? Has the update had any impact on your own rankings (positive or negative)? Let us know in the comments.

    This is the first official Penguin announcement we’ve seen since Google revealed its initial Penguin revamp, with 2.0 in May.

    Penguin 2.0 was the biggest tweak to Penguin since the update initially launched in April of last year, which was why it was called 2.0 despite the update getting several refreshes in between.

    Cutts said this about Penguin 2.0 back when it rolled out: “So this one is a little more comprehensive than Penguin 1.0, and we expect it to go a little bit deeper, and have a little bit more of an impact than the original version of Penguin.”

    Penguin 2.0 was said to affect 2.3% of queries, with previous data refreshes only impacting 0.1% and 0.3%. The initial Penguin update affected 3.1%. While this latest version (2.1) may not be as big as 2.0 or the original, the 1% of queries affected still represents a significantly larger query set than the other past minor refreshes.

    Hat tip to Danny Sullivan for the numbers. The folks over at Search Engine Land, by the way, have been keeping a list of version numbers for these updates, which differs from Google’s actual numbers, so if you’ve been going by those, Danny sorts out the confusion for you.

    Penguin, of course, is designed to attack webspam. Here’s what Google said about it in the initial launch:

    The change will decrease rankings for sites that we believe are violating Google’s existing quality guidelines. We’ve always targeted webspam in our rankings, and this algorithm represents another improvement in our efforts to reduce webspam and promote high quality content. While we can’t divulge specific signals because we don’t want to give people a way to game our search results and worsen the experience for users, our advice for webmasters is to focus on creating high quality sites that create a good user experience and employ white hat SEO methods instead of engaging in aggressive webspam tactics.

    Word is that Penguin 2.1 has had a big impact on webmasters.

    “We have threads at WebmasterWorld, Black Hat Forums, tons at Google Webmaster Help, Threadwatch and many others,” forum watcher Barry Schwartz said on Monday. “Keep in mind, this was announced late Friday afternoon and the threads are just going to get worse when more people check their analytics after the weekend is over, sometime this morning.”

    “I’ve seen screen shots of Google Analytics showing websites completely destroyed by this update,” he added. “I’ve also seen screen shots of Google Analytics showing websites that recovered in a major way from previous Penguin updates. This had huge swings both ways for webmasters and SEOs. Some recovered and are back in business, while others are about to lose their businesses.”

    On figuring out if your site was affected by Penguin 2.1, Kristi Kellogg from global Internet marketing firm Bruce Clay, Inc. says, “BCI recommends monitoring your organic traffic in Google Analytics over the next two weeks, looking for a dip. A dip in traffic occurring on this date may indicate that your site has been hit by this update. In some cases, you might see an increase in traffic, which would indicate an outranking competitor took a blow from Penguin 2.1.”

    On combating Penguin 2.1, she says to clean up your backlink profile, adding, “Going forward, move away from a ‘link building’ mindset. Over and over, Cutts has warned that links should not be your main focus. The main focus should be the user experience. Focus on creating and promoting quality content that is useful, has value — in short, create something compelling that people will want to share. If you do succeed, natural links will be a byproduct of your efforts.”

    And remember, Cutts and co. are paying attention to what people in the “black hat” forums are saying.

    As you can see, Penguin is still part of Google’s Hummingbird algorithm. That was an overhaul of Google’s larger algorithm, which still includes various pieces, such as Penguin and Panda. So if you were expecting those pieces to go away for some reason, let Penguin 2.1 be your wake-up call.

    More of our past Penguin coverage here.

    Have you seen any noticeable effects of Penguin? Hummingbird? Let us know in the comments.

    Image: Batman: Arkham City

  • Matt Cutts Tells You Why Your Site’s PageRank Isn’t Changing (Kind Of)

    The question of whether or not Google Toolbar PageRank is dead has come back around, as Google’s Matt Cutts indicated the other day that we’re not likely to see an update before the end of the year.

    This Twitter exchange occurred on Sunday (via Barry Schwartz):

    Google hasn’t updated it since February, after historically updating it every three or four months.

    The latest Webmaster Help video from Google has come out, and it just happens to talk about Google Toolbar PageRank. Unfortunately, it doesn’t really tell us anything new. It’s pretty much the same thing Cutts said last time he did a video on it.

    He says, “The thing to remind yourself about is that the Google Toolbar PageRank, number one, it’s only updated periodically, so, you know, for a while, we would update it relatively often. Now, we’ll update it a few times a year. Over time, the Toolbar PageRank is getting less usage just because recent versions of Internet Explorer don’t really let you install toolbars as easily, and Chrome doesn’t have the toolbar so over time, the PageRank indicator will probably start to go away a little bit.”

    “But it’s also the case that we only update this information every few months, so it does take time in order to show up,” he says.

    The messaging here is a little odd considering, again, that there hasn’t been an update since February, and we shouldn’t expect to see one before the end of the year.

    Update: Cutts notes that the video was recorded several months ago:

    Image: Google

  • SEO That Helped NYT Investigate Mugshot Sites Shares Further Insights Into Their Rankings

    In case you haven’t heard by now, Google pushed out an update late last week aimed at demoting shady sites that prey on people who have publicly available mugshots, and charge them to have the images removed.

    The New York Times published a big investigative report about the practice and Google’s response. The Times had Doug Pierce, the founder of SEO company Cogney, dig in and study some of the mugshot sites in question. The piece didn’t delve too much into the optimization behind the sites, but Pierce himself has since put up a blog post about the topic, which he pointed us to in an email.

    Here’s an excerpt discussing the sites’ backlinks:

    I’d sum up all 3 site’s backlink profiles as a combination of: people angry with them, people in support of them (freedom of information ralliers), people using their mugshot photos as sources, and spam generated by the sites themselves (mostly comment spam which seems to have slowed). It’s also interesting that there are some names that these sites specifically build links for. It seems to be a mixture of celebrities, gangsters, and people in the news like Tamerlan Tsarnaev. What they’re doing is trying to rank for “[famous/infamous person’s name] + mugshot” which is harder to do than ranking for random Joe Schmoe who got arrested thus link building is necessary.

    The most interesting links though come from media coverage of the mugshot sites. By talking about how sites like mugshots.com impairs lives and is a paid unpublishing scam, they often link to the sites in question, passing the news organization’s authority to them and in turn boosting their authority.

    He points to links from Gizmodo, Poynter, The New Yorker and SF Weekly, which link to the sites, but don’t include nofollow attributes, so they’re passing PageRank.

    As he notes in the post, as well as in the NYT piece, these sites also cater to “the long click,” which essentially equates to time on site. People who click these results from Google probably aren’t quickly going back to the results page. They’re seeing why the person in question is on a mugshot site, and possibly looking around at other pages on the site, which is a sign of quality content, right?

    People are also likely to click on the results simply because they are what they are. If you’re searching for someone, and a mugshot result comes up, you’re going to click on it. This, as Pierce points out, is a signal of quality itself.

    As noted earlier, while Google’s update appears to have helped in some cases, it didn’t work for the first example the New York Times gave in its piece.

    Image: Doug Pierce (Google+)

  • Google’s ‘Mugshot’ Update Misses New York Times’ First Example

    Google pushed out a new algorithm update late last week (in addition to Penguin) aimed at penalizing sites that profit from public mugshots by making people who appear in them pay to have them removed. It’s been described as a racket, and has tarnished the online reputations of people, including those who were never actually convicted of any crimes.

    The story came to light over the weekend, with the New York Times publishing an in-depth piece about the sites that engage in this, the victims, and what Google is doing about it. Google confirmed to the newspaper that it launched the aforementioned update on Thursday.

    Danny Sullivan from Search Engine Land took to Twitter to imply that Google seemed to be acting in response to the New York Times article, despite the problem being pointed out earlier this year in an opinion piece by Jonathan Hochman and Jonah Stein on his own Search Engine Land site. Google’s Matt Cutts, however, said that Google has been working on this update for months, and that the SEL piece contributed to getting the team to work on it. The timing of the launch, as it corresponds to the NYT piece, however, is certainly interesting.

    Either way, the update is out there, but now there are reports that it isn’t doing what it’s supposed to in all cases – most notably, in a specific case outlined in the NYT piece itself. Maxwell Birnbaum, who was mentioned in the intro to the article, still has a result from mugshots.com appear as the top result when you search for his name. The site highlights its “unpublish mugshot” service right at the top of the page.

    Unpublish Mugshot

    As Sullivan notes, the update does appear to have helped another victim from the NYT piece – Janese Trimaldi – but even in her results, you don’t have to go too far into Google’s results to find a DirtSearch.org link, pointing to an arrests lookup for her name. At least her LinkedIn profile is at the top. And of course, the New York Times piece itself appears in the results.

    As Google often says, no algorithm is perfect, but you’d hope it would at least get the first example mentioned in the story that exposed this all to the general public.

    [via Julio Fernandez/Danny Sullivan]

  • Matt Cutts Indicates You Should Not Hold Your Breath For A PageRank Update

    Back in August, we discussed Google’s lack of a toolbar PageRank update, and speculated that it might simply be dead. Based on recent comments by Google’s Matt Cutts, we’re still leaning towards this.

    Barry Schwartz points to a Twitter response from Cutts to a question about it:

    Roughly translated: Don’t hold your breath.

    The last update came in early February. Historically, Google has typically updated the data about every three or four months. Last year, it was updated four times. Obviously things have changed. Even if it’s not dead, it’s even less useful than before, being updated so infrequently. It might as well be dead.

    Cutts said in a video earlier this year, “It might be the case that, it might be such that over time, maybe the PageRank feature is not used by as many people, and so maybe it will go away on its own or eventually we’ll reach the point where we say, ‘Okay, maintaining this is not worth the amount of work.’”

    That time may have already come and gone.

  • Google Launches Update To Demote Shady Mugshot Sites

    As previously reported, Google announced the launch of Penguin 2.1 on Friday, but that’s not the only algorithmic change the search giant has reportedly pushed out. Sometime late last week, they also pushed an update to demote mugshot sites that charge people to have their images removed.

    This is a pretty interesting story. The New York Times published an in-depth look at such sites, speaking to a variety of parties, including a guy who runs one of them, some who have been negatively impacted by them, an SEO, and spokespeople from Google, MasterCard and PayPal. It’s definitely worth a read.

    The basic gist is that there are a bunch of sites out there that make money by gathering mugshots (which are in the public domain), and then getting ranked in the search results for name searches for the people who appear in the shots. Before this update, these sites were ranking very well in Google, and causing major reputation-damaging problems for the people. And we’re not talking just hardened criminals, murderers and sex offenders here. We’re talking about people who were arrested, but never convicted, people that made minor mistakes, and have repaid their “debt to society,” and others who simply don’t deserve to have a mugshot be the first thing that comes up in a Google search for their name when they’re trying to get a job.

    The sites charge money to have the damaging content removed – sometimes up to $400 – and even when one pays one site, the same content is likely on a bunch of other sites. This is possibly one reason why Google actually acted on this.

    When the Times first reached out to Google, a spokesperson told them that “with very narrow exceptions, we take down as little as possible from search.”

    This is more in line with what you’d expect from Google. The company is frequently asked to take down search results by people who feel their reputations are being damaged, but always resists, until they’re legally obligated to do so.

    After a couple of days, however, the Times got another statement from the same spokesperson, who said he was unaware of certain efforts at the company:

    “Our team has been working for the past few months on an improvement to our algorithms to address this overall issue in a consistent way. We hope to have it out in the coming weeks.”

    The Times says it learned that Google worked even faster than expected on this, and introduced the algorithm change on Thursday.

    “The effects were immediate: on Friday, two mug shots of Janese Trimaldi, which had appeared prominently in an image search, were no longer on the first page,” reports the Times’ David Segal.

    This is a pretty big moment for reputation management because (as mentioned), Google doesn’t typically respond. Clearly, the company felt that these sites were violating its guidelines, and acted accordingly.

    It will be interesting to see if any other types of sites were affected unintentionally. With the latest Penguin update launching so close to this, we may never know.

  • Google Gives Advice On Making Your Site Available In More Languages

    Google has released a new video aimed at helping webmasters make their sites available in more languages. The latest Webmaster Help video comes from Developer Programs Tech Lead Maile Ohye.

    Specifically, the video discusses details about rel=”alternate” and its implementation on multilingual/multinational sites.

    It’s seventeen minutes long, so if you don’t want to sit through the whole thing, there’s a transcript on the video page that you can skim through.

    Google provides a list of resources on the topic here.
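
    In practice, the annotations the video covers boil down to rel=”alternate” hreflang links that tell Google which URL serves which language or region. Here’s a minimal sketch for a hypothetical site with English and French versions (each version lists all of the alternates, including itself, and the same annotations can go in a sitemap instead of the page markup):

      <head>
        <!-- On http://example.com/en/ -->
        <link rel="alternate" hreflang="en" href="http://example.com/en/" />
        <link rel="alternate" hreflang="fr" href="http://example.com/fr/" />
        <!-- Optional fallback for users whose language doesn't match any version -->
        <link rel="alternate" hreflang="x-default" href="http://example.com/" />
      </head>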

    If this is something that you’re interested in, you may also want to check out this video from Matt Cutts from a couple years ago, when he discussed duplicate content and languages, and cautioned against using Google Translate to auto-translate your site.

    Image: Google

  • Google: Don’t Use Nofollow On Internal Links

    In the latest Webmaster Help video, Google’s Matt Cutts discusses the use of rel=”nofollow” on internal links, addressing the following user-submitted question:

    Does it make sense to use rel=”nofollow” for internal links? Like, for example, to link to your login page? Does it really make a difference?

    “Okay, so let me give you the rules of thumb,” he begins. “I’ve talked about this a little bit in the past, but it’s worth mentioning again. rel=’nofollow’ means the PageRank won’t flow through that link as far as discovering the link, PageRank computation [and] all that sort of stuff. So, for internal links – links within your site – I would try to leave the nofollow off, so if it’s a link from one page on your site to another page on your site, you want that PageRank to flow. You want Googlebot to be able to find that page. So almost every link within your site – that is a link going from one page on your site to another page on your site – I would make sure that the PageRank does flow, which means leaving off the nofollow link.”

    “Now, this question goes a little bit deeper, and it’s a little more nuanced,” Cutts continues. “It’s talking about login pages. It doesn’t hurt if you want to put a nofollow pointing to a login page or to a page that you think is really useless like a terms and conditions page or something like that, but in general, it doesn’t hurt for Googlebot to crawl that page because it’s not like we’re gonna submit a credit card to make an order or try to log in or something like that.”

    He goes on to note that you would probably want a nofollow on pages pointing to other sites, like in cases where people abuse comment systems. The general rule for internal linking, however, is to go ahead and let the PageRank flow, and let Googlebot learn all about your site. Even in cases where you don’t want Google to crawl the page, you might as well just use noindex, he says.

    He also suggests that login pages can still be useful for some searchers.
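
    Putting that advice into markup terms, here’s a rough sketch (the URLs are hypothetical): leave internal links alone so PageRank flows, reserve nofollow mainly for untrusted outbound links like those in user comments, and use a noindex meta tag on something like a login page if you’d rather keep it out of the index entirely.

      <!-- Internal link: no nofollow, so PageRank flows and Googlebot can discover the page -->
      <a href="/products/widgets">Our widgets</a>

      <!-- Untrusted outbound link in a user comment: nofollow is appropriate here -->
      <a href="http://example.org/commenter-site" rel="nofollow">commenter's site</a>

      <!-- On the login page itself, if you don't want it indexed at all -->
      <meta name="robots" content="noindex">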

    Image: Google

  • Is Google Dumbing Down Search Results?

    There has been an interesting discussion about Google and search quality this week thanks to comments made by a Googler who suggested that a site with higher quality, better information is not always more useful.

    Wait, what?

    Hasn’t Google been pounding the message of “high quality content is how you rank well in Google” in everybody’s heads for years? Well, sometimes dumbed down is better. Apparently.

    Do you believe there are times when Google should not be providing the most high-quality search results at the top of the rankings? Tell us what you think.

    Web developer Kris Walker has started a site called The HTML and CSS Tutorial (pictured), which he aims to make a super high quality resource for beginner developers to learn the tricks of the trade. The goal is to get its content to rank well in search engines – specifically to rank better than content from W3Schools, which he finds to be lackluster.

    “The search results for anything related to beginner level web development flat out suck,” he writes.

    “So the plan is to create a site, which I’m calling The HTML and CSS Tutorial, with the goal of winning the search engine battle for beginner level web development material,” he says. “To do this it needs to have the best learning material on the web (or close to it), along with comprehensive HTML, CSS, and JavaScript reference material. It needs to provide high quality content coupled with an information architecture that will get a beginner up to speed, meeting their immediate needs, while allowing them to go through a comprehensive course of material when they are ready.”

    Okay, so it sounds like he’s got the right attitude and strategy in mind for getting good search rankings. You know, creating high quality content. This is what Google wants. It has said so over and over (and over and over) again. The Panda update completely disrupted the search rankings for many websites based on this notion that high quality, informative content is king when it comes to search visibility. It makes sense. Above all else, people searching for content want to land on something informative, authoritative and trustworthy, right?

    Well, not always, according to one Googler.

    Walker’s post appeared on Hacker News, and generated a fair amount of comments. One user suggests that higher quality sites are often further down in the search results because they’re not as popular as the sites that are ranked higher.

    Google’s Ryan Moulton comments, “There’s a balance between popularity and quality that we try to be very careful with. Ranking isn’t entirely one or the other. It doesn’t help to give people a better page if they aren’t going to click on it anyways.”

    In a later comment, Moulton elaborates:

    Suppose you search for something like [pinched nerve ibuprofen]. The top two results currently are http://www.mayoclinic.com/health/pinched-nerve/DS00879/DSECT… and http://answers.yahoo.com/question/index?qid=20071010035254AA…
    Almost anyone would agree that the mayoclinic result is higher quality. It’s written by professional physicians at a world renowned institution. However, getting the answer to your question requires reading a lot of text. You have to be comfortable with words like “Nonsteroidal anti-inflammatory drugs,” which a lot of people aren’t. Half of people aren’t literate enough to read their prescription drug labels: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1831578/

    The answer on yahoo answers is provided by “auntcookie84.” I have no idea who she is, whether she’s qualified to provide this information, or whether the information is correct. However, I have no trouble whatsoever reading what she wrote, regardless of how literate I am.
    That’s the balance we have to strike. You could imagine that the most accurate and up to date information would be in the midst of a recent academic paper, but ranking that at 1 wouldn’t actually help many people. This is likely what’s going on between w3schools and MDN. MDN might be higher quality, better information, but that doesn’t necessarily mean it’s more useful to everyone.

    Wow, so as far as I can tell, he’s pretty much saying that Google should show dumber results for some queries on the theory that people won’t be smart enough to understand what the higher-quality results are talking about, or capable enough to research further and learn more about the information they find there. If you’re interpreting this a different way, please feel free to weigh in.

    Note: For me, at least, the Mayo Clinic result is actually ranking higher than the Yahoo Answers result for the “pinched nerve ibuprofen” query example Moulton gave. I guess literacy prevailed after all on that one.

    If Google is actively dumbing down search results, that seems somewhat detrimental to society, considering the enormous share of the search market Google holds.

    Meanwhile, Google itself is only getting smarter. On Thursday, Google revealed that it has launched its biggest algorithm change in twelve years, dubbed Hummingbird. It’s designed to enable Google to better understand all of the content on the web, as it does the information in its own Knowledge Graph. I hope they’re not dumbing down Knowledge Graph results too, especially considering that it is only growing to cover a wider range of data.

    Well, Google’s mission is “to organize the world’s information and make it universally accessible and useful.” There’s nothing about quality, accuracy, or better informing people in there.

    Hat tip to Search Engine Roundtable for pointing to Moulton’s comments.

    Should Google assume that people won’t understand (or further research) the highest-quality content, and point them towards lesser-quality content that is easier to read? Let us know what you think.

    Image: htmlandcsstutorial.com

  • Hummingbird Is Google’s Biggest Algorithm Change In 12 Years

    Let’s get one thing straight right up front. Hummingbird is not a new algorithm update like Panda or Penguin. It’s a new algorithm. Panda and Penguin are parts of the bigger algorithm. Hummingbird is the actual bigger algorithm. Google has been around for fifteen years now, and Hummingbird is apparently the biggest thing they’ve done to the algorithm in twelve.

    Do you think Hummingbird is going to have a significant impact on your ability to rank in search results? For better or for worse? Let us know what you think in the comments.

    The good news for webmasters who fear being struck down by any major change Google makes to its algorithm is that Hummingbird launched about a month ago, so if you weren’t hit by it Panda/Penguin style (there haven’t been many complaints), you probably don’t need to worry much about it. At least not in the immediate term.

    Google announced the algorithm update at a press event on Thursday, along with some other interface and Knowledge Graph tweaks. After the event, we learned that Hummingbird was described as the biggest Google algorithm change since Caffeine, and that it is designed to let Google quickly parse entire questions and complex queries and return relevant answers, as opposed to looking at queries on a keyword-by-keyword basis.

    For all intents and purposes, Google is apparently trying to do what it does with its own Knowledge Graph with the rest of the web. Your web. The web made up of your websites and everyone else’s. At least that’s what it sounds like. Hummingbird is meant to help Google understand your webpages the way it understands the data in its Knowledge Graph. We’ll see how that goes.

    Longtime search industry reporter Danny Sullivan was at the event, and spoke with Google’s Amit Singhal and Ben Gomes afterwards to learn a bit more about Hummingbird. From this, we learn that Google calls the algorithm “Hummingbird” because it’s “precise and fast”. Singhal also reportedly told Sullivan that the algorithm hadn’t been “so dramatically rewritten” (Sullivan’s words) since 2001.

    “Hummingbird should better focus on the meaning behind the words,” Sullivan reports. “It may better understand the actual location of your home, if you’ve shared that with Google. It might understand that ‘place’ means you want a brick-and-mortar store. It might get that ‘iPhone 5s’ is a particular type of electronic device carried by certain stores. Knowing all these meanings may help Google go beyond just finding pages with matching words.”

    “In particular, Google said that Hummingbird is paying more attention to each word in a query, ensuring that the whole query — the whole sentence or conversation or meaning — is taken into account, rather than particular words,” he adds. “The goal is that pages matching the meaning do better, rather than pages matching just a few words…Hummingbird is designed to apply the meaning technology to billions of pages from across the web, in addition to Knowledge Graph facts, which may bring back better results.”

    So from the sound of it, this is really just an extension of Google’s ongoing strategy to become less dependent on keywords, which does have implications for SEO. While webmasters may not have to worry about a major drop-off in rankings like they saw with Panda or Penguin, this could be more of an ongoing struggle for those competing to get onto search results pages.

    It’s probably going to be more important than ever to give Google as much information about your site as possible, so that it “understands” it. I would imagine that Google will continue to give webmasters new tools to help with this over time. For now, according to Google (per Sullivan’s report), you don’t need to worry about anything, and Google’s normal SEO guidance remains the same.

    “Not content with taking away the little keyword data we had left this week, Google has again surprised the online marketing industry with a brand new algorithm,” says Econsultancy’s Graham Charlton.

    This is in reference to Google’s move to make the default search experience encrypted for all users, which means their search terms will show up as “not provided” in Google Analytics. Google also recently killed the popular Keyword Tool.

    It’s clear that keywords are becoming less and less important to search engine ranking success as Google gets smarter at figuring out what things mean, both on the query side of things and on the webpage side of things. Luckily, Hummingbird presumably still consists of over 200 different signals that webmasters can potentially take advantage of to gain a competitive edge.

    Thoughts on how Hummingbird will affect your SEO strategy? Share them in the comments.

    Image: Thinkstock

  • Google Hummingbird Algorithm Said To Be Biggest Overhaul Since Caffeine

    Google made a bunch of search-related announcements today. Most of them were discussed in a blog post on the company’s Inside Search blog, but the one most readers will probably be most interested in was not mentioned there.

    At a press event at the old Google garage, Google revealed that it has implemented a new algorithm called Hummingbird, which is reportedly geared towards helping with complex queries, and already affects 90% of queries.

    According to TechCrunch’s Greg Kumparak, who was presumably at the event, Google didn’t get too detailed in its explanation, but said that it is the biggest overhaul to its engine since Caffeine.

    “The main focus, and something that went repeated many a time, was that the new algorithm allows Google to more quickly parse full questions (as opposed to parsing searches word-by-word), and to identify and rank answers to those questions from the content they’ve indexed,” he writes.

    Google apparently pushed this out about a month ago, so if your traffic hasn’t already been significantly impacted, you shouldn’t expect it to be.

    Reuters took this away from the event:

    Google is trying to keep pace with the evolution of Internet usage. As search queries get more complicated, traditional “Boolean” or keyword-based systems begin deteriorating because of the need to match concepts and meanings in addition to words.

    “Hummingbird” is the company’s effort to match the meaning of queries with that of documents on the Internet, said Singhal from the Menlo Park garage where Google founders Larry Page and Sergey Brin conceived their now-ubiquitous search engine.

    And from Forbes contributor Robert Hof:

    Most people won’t notice an overt difference to search results. But with more people making more complex queries, especially as they can increasingly speak their searches into their smartphones, there’s a need for new mathematical formulas to handle them.

    This update to the algorithm focuses more on ranking sites for better relevance by tapping into the company’s Knowledge Graph, its encyclopedia of concepts and relationships among them, according to Amit Singhal, Google’s senior VP of search. Caffeine was more focused on better indexing and crawling of sites to speed results.

    This is pretty much all we know about Hummingbird at this point, but I’m sure it will be discussed a lot more in the coming days. SMX East starts on October 1st, so there will no doubt be plenty of discussion to come out of that.

    Image: Thinkstock