WebProNews

Tag: Search Quality Raters

  • Google Updates Search Quality Rater Guidelines

    Google announced that it has updated its guidelines for search quality raters. The reason, as with many of the company’s recent announcements, is the increasing use of mobile devices.

    The company says it recently completed a “major” revision of the guidelines with mobile in mind.

    “Developing algorithmic changes to search involves a process of experimentation,” says Google search growth and analysis senior product manager Mimi Underwood. “Part of that experimentation is having evaluators—people who assess the quality of Google’s search results—give us feedback on our experiments. Ratings from evaluators do not determine individual site rankings, but are used to help us understand our experiments. The evaluators base their ratings on guidelines we give them; the guidelines reflect what Google thinks search users want.”

    “In 2013, we published our human rating guidelines to provide transparency on how Google works and to help webmasters understand what Google looks for in web pages,” Underwood adds. “Since that time, a lot has changed: notably, more people have smartphones than ever before and more searches are done on mobile devices today than on computers. We often make changes to the guidelines as our understanding of what users want evolves, but we haven’t shared an update publicly since then.”

    You can see the update here.

    Google says it won’t update the public document with every little change, but will try to do so for the big ones.

    Image via Google

  • Google Posts Big ‘Search Quality Rating Guidelines’ Document, Says It’s Just The ‘Cliffs Notes’ Version Of The Real Thing

    We’ve seen Google’s search quality raters referenced numerous times, but now Google has made the whole set of guidelines available in one giant PDF for your perusal. The document is called “Search Quality Rating Guidelines,” and, interestingly, it’s labeled version 1.0 and dated November 2012. It was released as part of Google’s new “How Search Works” site.

    “Google relies on raters, working in countries and languages around the world, to help us measure the quality of our search results, ranking, and search experience,” Google explains. “These raters perform a variety of different kinds of ‘rating tasks’ designed to give us information about the quality of different kinds of results in response to different kinds of queries. The data they generate is rolled up statistically to give us within the Google search team a view of the quality of our search results and search experience over time, as well as an ability to measure the effect of proposed changes to Google’s search algorithms. Raters’ judgments do not directly impact Google’s search result rankings. While a rater may give a particular URL a score, that score does not directly increase or decrease a given website’s ranking. Instead these scores are used in aggregate to evaluate search quality and make decisions about changes.”

    In the preface of the document, Google notes that the published document is not the full version raters actually use day to day, but rather a “Cliffs Notes” version.

    “The raters’ version includes instruction on using the rating interface, additional rating examples, etc.,” Google explains. “These guidelines are used as rating specifications for search raters, and this document in particular focuses on a core type of rating task called ‘URL rating.’ In a URL rating task, a rater is shown a search query from their locale (country + language) and a URL that could be returned by a search engine for that query. The raters ‘rate’ the quality of that result for that query, on a scale described within the document. Sounds simple, right? As you’ll see, there are many cases to think through, and this document is used to guide raters on some of those cases and how to look at them.”

    In a Webmaster Help video released this past October, Matt Cutts also discussed the quality raters’ “impact” on algorithms.

    Google put out another video in May discussing how it uses the human raters.