WebProNews

Tag: SEO

  • Google Thinks You’ll Find The New Penguin Update To Be A ‘Delight’

If the latest comments from Google about a pending Penguin update are to be believed, webmasters will find the update just delightful. In all seriousness, it does sound like it will be a little better for those affected, but obviously time will tell.

    Do you expect the coming Penguin update to be a delight? Let us know in the comments.

Last month, Google said a new Penguin update would likely be launched before the end of the year, and now it looks like it may be here as early as next week. Maybe. Or maybe not. But still, maybe!

This has been the basic narrative coming out of SMX East, where Google webmaster trends analyst Gary Illyes spoke and reportedly suggested that the company “may” launch a new update, described as “a large re-write of the algorithm,” sometime next week.

    Barry Schwartz from SMX sister site Search Engine Land covered Illyes’ comments, reporting:

    The new Penguin update will make webmaster’s life “easier a bit” and for most people it will make it a “delight.”

    Gary also said that if you disavow bad links now or as of about two weeks ago, it will likely be too late for this next Penguin refresh. But Gary added that the Penguin refreshes will be more frequent because of the new algorithm in place.

That’s probably where the “delight” part comes in. Webmasters and SEOs have been very frustrated with the Penguin update, not only because of the havoc it wreaked on their sites, but also because of the time it takes for Google to refresh the data and give them a shot at getting back into the rankings.

    Schwartz’s report went out of its way to repeatedly suggest that the update may or may not come next week, but made it sound like it most likely will. Then, it started to seem less likely. Kind of.

Illyes seemed to poke fun at Schwartz’s report on Google+, saying, “I love how you guys twist ‘soon’ into this.”

    He did however, also say, “Hey, maybe next week.”

Schwartz posted a follow-up report on his personal blog, saying:

    So what did Gary say on stage yesterday? People don’t believe that it may come next week. He did not say it WILL come next week. He said that based on internal communication from two weeks ago, the decision maker on if and when the algorithm will be pushed live, said that it will be live in weeks. Since that was about two weeks ago, we asked if weeks meant a “few weeks” and if so, that would mean Penguin would be released in a week from now. He said, with a smile, it may be released next week. You can also see Gary’s comments about my coverage on these two posts on Google+. But he clarified, if the tests show issues, then they won’t push it out.

    In the end, when it comes probably doesn’t matter that much because it’s definitely coming, and it’s definitely coming soon. Something tells me you’ll hear about it when it hits.

    What matters more than the when is the what, and it does sound like the what is going to be much more bearable than previous Penguin updates, which have left webmasters and SEOs scratching their heads in anticipation of another refresh that could help them recover lost rankings. It sounds like the long waiting game is about to be eliminated, bringing the Penguin update more on par with the Panda update, which is much more frequently refreshed.

    To get you through until the next wave of Penguin hysteria, here’s what people are saying about Penguin in real time:


    Are you looking forward to the new Penguin? Share your thoughts about it in the comments.

    Note: This article has been updated from its original form to include additional information.

    Image via YouTube

  • Here’s Some Potentially Troubling News About Google Search

    So you know those answer boxes Google has been showing in search results for a while now? The ones that extract text from third-party websites (possibly your own), to answer users’ queries without them having to click through to the site? Well, it looks like they’ve dramatically increased the frequency with which they’re doing this, and at times it’s inaccurate or outdated.

    Is this feature really the best for users? Does it harm webmasters? Let us know what you think in the comments.

    Moz has a new report finding that last week, Google jacked up the number of direct answer boxes it’s showing by as much as 98%. The report includes this graph showing the jump:

    The report says that many of the answers come from Wikipedia, as you’d expect, but Google clearly gets its answers from all kinds of sites. For example, it once turned to phillytown.com for this gem:

    It should be noted that after that one got some attention, they stopped showing a direct answer box for that query, in favor of the classic ten blue links-style results, which are led by Urban Dictionary with an even more disgusting (though not necessarily inaccurate) description.

I guess Google didn’t want to own that one. This is followed by another Urban Dictionary result for “Upper Decker Double Blumpkin,” which you probably don’t want to read if you’re easily offended. Interestingly, the original phillytown.com result that Google once considered “the answer” to the question isn’t even on the first page of results.

    Not to get too far off base here, but the point is that these “answers” could easily come from any number of sources.

    Dr. Peter J. Meyers, the report’s author, says, “Many of these new queries seem to be broad, ‘head’ queries, but that could be a result of our data set, which tends to be skewed toward shorter, commercial queries.”

    He notes that one four-word query with a new answer box was ‘girl scout cookies types’.

The fact that Google is increasing the number of direct answer boxes so drastically (and will probably continue to do so) is concerning to webmasters, as it could mean Google sending them less traffic – particularly for the content the search engine is actually showing to users. Remember, Google is also extracting data from websites now for its new “structured snippets,” which provide users with bits of information that keep them from having to click through to learn more.

    Another concerning angle to Google’s approach is the accuracy of the answers it’s actually displaying.

    Google launched its Knowledge Graph over two years ago. We’ve seen quite a few times that it can provide questionable, outdated, and/or inaccurate results.

In August, we learned about the “Knowledge Vault,” which is apparently the source of these third-party site-based answer results. From the sound of it, these answers are even more prone to outdated and/or inaccurate data.

    Meyers points to one of his own articles that Google uses for one of the direct answers. He notes that the information in question was outdated, as it was an older article. He went in and updated the content to reflect accurate information, but Google hadn’t caught up with it even after two months.

    Google’s Knowledge Graph doesn’t always update as quickly as it should, but this appears to be even worse. Much worse, and that’s troubling considering how much they’re cranking up the volume on this type of search result.

    “At this point, there’s very little anyone outside of Google can do but keep their eyes open,” concludes Meyers. “If this is truly the Knowledge Vault in action, it’s going to grow, impacting more queries and potentially drawing more traffic away from sites. At the same time, Google may be becoming more possessive of that information, and will probably try to remove any kind of direct, third-party editing (which is possible, if difficult, with the current Knowledge Graph).”

    It’s interesting to think about Google becoming “possessive” of information it’s getting from other sites.

    Of course Google isn’t for sites, as the company frequently reminds us. It’s for users. Unfortunately, inaccurate and outdated information isn’t good for them either. Sadly, much of the inaccuracy and outdated information is likely to go unnoticed, as people aren’t likely to dig deeper into other results a lot of the time. The point is, after all, to get users the info they’re looking for without them having to dig deeper.

    Is this the right direction for Google Search to be taking? Share your thoughts in the comments.

    Image via Moz

  • New Google Panda Update Should Help Small Sites

    There’s officially a new Panda update rolling out. If you have a small or medium-sized site, Google is suggesting this could potentially help you, though we’ve heard that before.

    Are you seeing any effects from the new Panda update? Good or bad? Let us know in the comments.

Google doesn’t typically announce or confirm Panda refreshes these days, but when there’s a major Panda update, they usually let the world know. On Thursday night, Google’s Pierre Far did just that via a Google+ update.

    This is exactly what he said:

    Earlier this week, we started a slow rollout of an improved Panda algorithm, and we expect to have everything done sometime next week.

    Based on user (and webmaster!) feedback, we’ve been able to discover a few more signals to help Panda identify low-quality content more precisely. This results in a greater diversity of high-quality small- and medium-sized sites ranking higher, which is nice.

    Depending on the locale, around 3-5% of queries are affected.

    There was a significant Panda refresh suspected earlier this month, but the company didn’t confirm that.

    If you’re keeping track, this is the 27th Panda update. Unofficial algorithm namer and numberer Danny Sullivan is calling it 4.1.

    A lot of people are happy to read Far’s words about small and medium-sized businesses. In fact, one person in WebmasterWorld was actually more impressed with the “which is nice” part. If Google wants small and medium-sized sites to succeed, then how thoughtful of them!

    When Google pushed out Panda 4.0 in May, it was also supposed to benefit small sites and businesses. Google’s Matt Cutts had discussed the update at a conference a couple months prior, and said it should have a direct impact on helping these businesses do better.

    One Googler on his team was said to be specifically working on ways to help small web sites and businesses do better in Google search results. While there was certainly a mix of reactions, there did seem to be more people claiming a positive impact from that update than usual.

    As you might expect, the reactions are mixed once again with the new update.

    If you were hit by Panda in the past, you might see some positive effects with the new update if you made changes that the algorithm likes or even if the new signals pick up on something you have that the algorithm wasn’t picking up on before. Of course it goes both ways. You may have escaped past updates unscathed, and triggered one of the newer signals this time around.

    Now that the new and improved Panda is on the loose, let the speculation about additional signals begin. As a reminder, these are the questions Google listed a few years ago when talking about how it assesses quality in relation to the Panda update:

    Would you trust the information presented in this article?

    Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?

    Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations?

    Would you be comfortable giving your credit card information to this site?

    Does this article have spelling, stylistic, or factual errors?

    Are the topics driven by genuine interests of readers of the site, or does the site generate content by attempting to guess what might rank well in search engines?

    Does the article provide original content or information, original reporting, original research, or original analysis?

    Does the page provide substantial value when compared to other pages in search results?

    How much quality control is done on content?

    Does the article describe both sides of a story?

    Is the site a recognized authority on its topic?

    Is the content mass-produced by or outsourced to a large number of creators, or spread across a large network of sites, so that individual pages or sites don’t get as much attention or care?

    Was the article edited well, or does it appear sloppy or hastily produced?

    For a health related query, would you trust information from this site?

    Would you recognize this site as an authoritative source when mentioned by name?

    Does this article provide a complete or comprehensive description of the topic?

    Does this article contain insightful analysis or interesting information that is beyond obvious?

    Is this the sort of page you’d want to bookmark, share with a friend, or recommend?

    Does this article have an excessive amount of ads that distract from or interfere with the main content?

    Would you expect to see this article in a printed magazine, encyclopedia or book?

    Are the articles short, unsubstantial, or otherwise lacking in helpful specifics?

    Are the pages produced with great care and attention to detail vs. less attention to detail?

    Would users complain when they see pages from this site?

    How (if at all) has Panda impacted your site? Let us know in the comments.

    Image via Wikimedia Commons

  • Google Introduces Structured Snippets

    Google announced the launch of “structured snippets,” a new feature that puts “facts” in the snippets of web results. As with Google’s Knowledge Graph, these facts may or may not be accurate.

    Here’s what they look like:

    The company says, “The WebTables research team has been working to extract and understand tabular data on the Web with the intent to surface particularly relevant data to users. Our data is already used in the Research Tool found in Google Docs and Slides; Structured Snippets is the latest collaboration between Google Research and the Web Search team employing that data to seamlessly provide the most relevant information to the user. We use machine learning techniques to distinguish data tables on the Web from uninteresting tables, e.g., tables used for formatting web pages. We also have additional algorithms to determine quality and relevance that we use to display up to four highly ranked facts from those data tables.”

    “Fact quality will vary across results based on page content, and we are continually enhancing the relevance and accuracy of the facts we identify and display,” Google adds.

    Well, that’s encouraging. Not all of this stuff will necessarily be true, but hopefully more of it will be over time.

    Image via Google

  • New Google Penguin Update Coming, Will Refresh Faster

    Google’s Penguin update is notorious for taking an extremely long time to get refreshed, leaving sites negatively impacted by it out of luck until Google finally pushes a refresh through. You can make all the changes you want in an effort to recover, but if Google doesn’t refresh it, it’s not going to make much difference.

    Google has now offered a couple of pieces of news. The third major Penguin update will be here before the end of the year, and it will start receiving quicker refreshes, which means sites (in theory) should be able to recover more quickly than they’ve been able to in the past.

    Is Google on the right track with the Penguin update? How do you think they’ve handled it thus far? Share your thoughts in the comments.

    Google is working on making the refresh process faster, which should please many webmasters and SEOs if and when this actually occurs.

    Google’s John Mueller talked about this in a Webmaster Hangout on Monday (via Search Engine Roundtable). He also noted that they’re still working on the long-anticipated update (the last refresh was nearly a year ago).

    “We are working on a Penguin update, so I think saying that there’s no refresh coming would be false,” Mueller said. “I don’t have any specific timeline as to when this happens…it’s not happening today, but I know the team is working on this and generally trying to find a solution that refreshes a little bit faster…but it’s not happening today, and we generally try not to give out too much of a timeline ahead of time because sometimes things can still change.”

    Asked if Penguin refreshes will come on a regular basis like those for the Panda update, Mueller said, “We’ll see what we can do there, so that’s something where we’re trying to kind of speed things up because we see that this is a bit of a problem when webmasters want to fix their problems, they actually go and fix these issues, but our algorithms don’t reflect that in a reasonable time. So that’s something where it makes sense to try to improve the speed of our algorithms…Some of you have seen this first hand, others have worked with other webmasters who have had this problem, and I think this is kind of something good to be working on.”

    Asked about the size of the impact of the next update, Mueller said, “That’s always hard to say, and I imagine the impact also depends on your website and whether or not it’s affected…if it’s your website, the impact is always big, right? We’re trying to find the right balance there to make sure we’re doing the right things, but sometimes it doesn’t go as quickly as we’d all like.”

    Mueller hinted in another Hangout nearly a month ago that the next Penguin wasn’t too far off. He participated in yet another Hangout on Friday morning (via Barry Schwartz), and said that we can expect the next Penguin update (which would be 3.0, for all intents and purposes), by the end of the year.

    That’s at least somewhat of a timeline. We’ve only got three months left. Granted, even this timeline isn’t certain.

    Asked if Penguin 3.0 will launch in 2014, Mueller said, “My guess is yes, but as always, there are always things that can happen in between. I’m pretty confident that we’ll have something in the reasonable future but not today, so we’ll definitely let you know when things are happening.”

    They’ll definitely let us know? That doesn’t seem like Google’s style these days.

    This week, Mueller also confirmed that a Penguin refresh is indeed required for an affected site to recover. Most people probably already knew that or at the very least expected as much, but it’s always nice to have official word from Google.

That’s also all the more reason for some webmasters to welcome the next update with open arms.

    Note: This story has been updated to include additional information.

    Do you expect Google to actually roll the update out before the end of the year? Will you welcome a more rapidly-refreshing Penguin? Let us know in the comments.

    Image via YouTube

  • Bing Reveals URL Keyword Stuffing Spam Filtering

    Bing revealed in a blog post this week that it rolled out an update to its algorithm a few months ago that targets URL keyword stuffing. They had alluded to such an update in another recent post.

    Igor Rondel, Principal Development Manager for Bing Index Quality writes:

    Like any other black hat technique, the goal of URL KWS, at a high level, is to manipulate search engines to give the page a higher rank than it truly deserves. The underlying idea unique to URL KWS relies on two assumptions about ranking algorithms: a) keyword matching is used and b) matching against the URL is especially valuable. While this is somewhat simplistic considering search engines employ thousands of signals to determine page ranking, these signals do indeed play a role (albeit significantly less than even a few years ago.) Having identified these perceived ‘vulnerabilities’, the spammer attempts to take advantage by creating keyword rich domains names. And since spammers’ strategy includes maximizing impressions, they tend to go after high value/ frequency/ monetizable keywords (e.g. viagra, loan, payday, outlet, free, etc…)

Approaches commonly used by spammers, as Rondel lists, include: multiple hosts with keyword-rich hostnames; host/domain names with repeating keywords; URL clusters across the same domain with varied host names comprised of keyword permutations; and URL squatting.

    Rondel notes that not all URLs containing multiple keywords are spam, and that the majority actually aren’t. For this reason, Bing is using its new detection technique in combination with other signals.

    “Addressing this type of spam is important because a) it is a widely used technique (i.e. significant SERP presence) and b) URLs appear to be good matches to the query, enticing users to click on them,” he says.

Bing isn’t giving out all the details about its detection algorithms to prevent abuse, but does note that it takes into account things like: site size; number of hosts; number of words in host/domain names/path; host/domain/path keyword co-occurrence; percentage of the site cluster comprised of top-frequency host/domain name keywords; host/domain names containing certain lexicons/pattern combinations; and site/page content quality and popularity signals.
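    To make that a bit more concrete, here’s a toy sketch of the kind of URL features such a filter might look at – our own illustration, not Bing’s actual algorithm – counting words and keyword repetition in the host and path:

```python
# A toy illustration (not Bing's actual algorithm) of URL keyword-stuffing features:
# word counts plus keyword repetition in the host and path of a URL.
from urllib.parse import urlparse
import re

# Example "high value" keywords mentioned in Rondel's post.
SPAMMY_KEYWORDS = {"viagra", "loan", "payday", "outlet", "free"}

def url_keyword_features(url):
    parsed = urlparse(url)
    host_words = [w for w in re.split(r"[.\-_]", parsed.hostname or "") if w]
    path_words = [w for w in re.split(r"[/\-_.]", parsed.path.strip("/")) if w]
    words = [w.lower() for w in host_words + path_words]
    return {
        "host_word_count": len(host_words),
        "path_word_count": len(path_words),
        "spammy_keyword_hits": sum(w in SPAMMY_KEYWORDS for w in words),
        "max_word_repetition": max((words.count(w) for w in set(words)), default=0),
    }

# A keyword-stuffed URL scores high on repetition and spammy-keyword hits.
print(url_keyword_features("http://cheap-payday-loan.payday-loans-free.example/payday/loan-payday"))
```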

    Via Search Engine Journal

    Image via Bing

  • Google Updates Webmaster Tools API

    Google announced that it has updated the Webmaster Tools API to make it more consistent with other Google APIs. Those who already use other Google APIs, the company says, should find this one easy to implement.

    According to Google, the updated API makes it easier to authenticate for apps or web services, and provides access to some of the main Webmaster Tools features. This is what you can specifically do with it:

    • list, add, or remove sites from your account (you can currently have up to 500 sites in your account)
    • list, add, or remove sitemaps for your websites
    • get warning, error, and indexed counts for individual sitemaps
    • get a time-series of all kinds of crawl errors for your site
    • list crawl error samples for specific types of errors
    • mark individual crawl errors as “fixed” (this doesn’t change how they’re processed, but can help simplify the UI for you)

    Google has examples for Python, Java, and OACurl. It’s encouraging developers to link to projects that use Google APIs in the comments of its announcement post.
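    For a rough idea of what this looks like in practice, here’s a sketch in Python, assuming the google-api-python-client library and a service account that has been added as a user on the sites in question (the file name and example.com URL are placeholders; defer to Google’s own examples for the authoritative version):

```python
# A rough sketch of listing sites and sitemaps via the Webmaster Tools API (webmasters v3).
# Assumes google-api-python-client is installed and "service-account.json" (placeholder)
# holds credentials for an account with access to the sites.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
service = build("webmasters", "v3", credentials=credentials)

# List the sites verified in the account.
sites = service.sites().list().execute()
for entry in sites.get("siteEntry", []):
    print(entry["siteUrl"], entry["permissionLevel"])

# List sitemaps (with warning/error counts) for one of those sites.
sitemaps = service.sitemaps().list(siteUrl="https://www.example.com/").execute()
for sitemap in sitemaps.get("sitemap", []):
    print(sitemap["path"], sitemap.get("warnings"), sitemap.get("errors"))
```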

    Image via Google

  • New Google Panda Update Refresh Suspected

    Google hasn’t confirmed that it rolled out a new Panda refresh, but signs point to one hitting on September 5th (Friday), according to Search Engine Roundtable and SEOs.

    Barry Schwartz points to a post in the Google Webmaster Help forum from a webmaster whose site was hit, and shares a response he received from Google, which is standard Panda advice:

    I’d recommend making sure your website has unique, compelling, and high-quality content of its own — not just content from other websites.

    He also points to a tweet from one SEO who says clients previously affected by Panda saw recoveries. Others in the comments on Schwartz’s post seem to support the notion that such an update occurred. One suggests it began in the middle of August, and another said they had multiple clients seeing recoveries on September 6th and 7th.

    Google doesn’t really announce or confirm Panda updates/refreshes like it used to, as they happen much more frequently these days. Earlier this summer, they did confirm a major update, but typically these refreshes aren’t noteworthy enough for that.

    As usual, the guidance is to create high quality content.

    Image via Wikimedia Commons

  • Google Makes It Easier For People To Search For Your Content

    Google announced that it’s now showing a new and improved sitelinks search box within search results, which will make it easier to find specific content on third-party websites from Google itself.

    The box is more prominent, and supports autocomplete. Here’s what it looks like for YouTube:

    You can mark up your own site so that Google has the ability to display a similar functionality for your content. Google explains:

You need to have a working site-specific search engine for your site. If you already have one, you can let us know by marking up your homepage as a schema.org/WebSite entity with the potentialAction property of the schema.org/SearchAction markup. You can use JSON-LD, microdata, or RDFa to do this; check out the full implementation details on our developer site.

    If you implement the markup on your site, users will have the ability to jump directly from the sitelinks search box to your site’s search results page. If we don’t find any markup, we’ll show them a Google search results page for the corresponding site: query, as we’ve done until now.

    More on the markup can be found on Google’s Developers site.
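    For a rough idea of what that markup looks like, here’s a minimal sketch of a schema.org/WebSite entity with a SearchAction, built as a Python dict and serialized to a JSON-LD script tag. The example.com URLs and the search parameter are placeholders; check Google’s developer documentation for the exact requirements of your site search.

```python
# A minimal sketch of sitelinks search box markup: a schema.org/WebSite entity with a
# potentialAction of type SearchAction, serialized as JSON-LD for the homepage.
# The example.com URLs and query parameter name are placeholders.
import json

website_markup = {
    "@context": "http://schema.org",
    "@type": "WebSite",
    "url": "https://www.example.com/",
    "potentialAction": {
        "@type": "SearchAction",
        "target": "https://www.example.com/search?q={search_term_string}",
        "query-input": "required name=search_term_string",
    },
}

# Embed the serialized object in a <script type="application/ld+json"> tag on the homepage.
print('<script type="application/ld+json">')
print(json.dumps(website_markup, indent=2))
print("</script>")
```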

    Image via Google

  • Google Offers ‘Optimize For Bandwidth’ Tool

    Google is offering webmasters a way to optimize their sites for bandwidth on Apache and Nginx.

    As the company notes, there are a lot of obstacles on the web when it comes to using less bandwidth, and sites contribute to the problem in a variety of ways, including non-minified code and images that weren’t saved for the web.

    Google offers an optimizing proxy for Chrome, but is now utilizing the same technology in its PageSpeed tool. Google’s Jeff Kaufman writes:

With Optimize for Bandwidth, the PageSpeed team is bringing this same technology to webmasters so that everyone can benefit: users of other browsers, secure sites, desktop users, and site owners who want to bring down their outbound traffic bills. Just install the PageSpeed module on your Apache or Nginx server [1], turn on Optimize for Bandwidth in your configuration, and PageSpeed will do the rest.

If you later decide you’re interested in PageSpeed’s more advanced optimizations, from cache extension and inlining to the more aggressive image lazyloading and defer JavaScript, it’s just a matter of enabling them in your PageSpeed configuration.

    Google’s John Mueller notes in a Google+ post, “Setting this up isn’t always trivial, but if your site gets a lot of traffic, looking into optimizations like this can result in noticable performance improvements.”

    More info about setting things up is available here.

    Image via Google

  • Were Your Google Authorship Efforts All For Nothing?

    Google introduced authorship support over three years ago, leading webmasters and anyone concerned with SEO to jump through a new set of hoops to make sure their faces were visible in Google search results, and hopefully even get better rankings and overall visibility in the long run. Now, Google has decided to pull the plug on the whole thing.

    Do you feel that authorship was a waste of time? Are you glad to see it go? Is Google making the wrong move? Share your thoughts in the comments.

    To be fair, Google called its authorship efforts experimental in the first place, but for quite a while, it looked like it would play more and more of a role in how Google treated search results, and more specifically, the people providing the content that populates them. Of course Google seems to be relying much less on people (at least directly) for search result delivery these days, favoring on-page “answers” over links to other sites.

    To my recollection, Google never came right out and said it would use authorship as a ranking signal, but it did go out of its way to really encourage people to take advantage, recording multiple videos on various ways to implement authorship markup on your website. As time went on, they added more ways to implement it, sending a signal that doing so would be in your best interest.

    They also added features, such as display of comments, circle counts, etc. They added authorship click and impression data to Webmaster Tools. They dropped the author search operator in Google News in favor of authorship. They added authorship to Google+ Sign-In less than a year ago. It seemed that Google was only valuing authorship more as time went on.

    A year ago, Google’s Maile Ohye said, “Authorship annotation is useful to searchers because it signals that a page conveys a real person’s perspective or analysis on a topic.” Emphasis added.

    Also last summer, Google’s Matt Cutts said, “I’m pretty excited about the ideas behind rel=’author’. Basically, if you can move from an anonymous web to a web where you have some notion of identity and maybe even reputation of individual authors, then webspam, you kind of get a lot of benefits for free. It’s harder for the spammers to hide over here in some anonymous corner.”

    “Now, I continue to support anonymous speech and anonymity, but at the same time, if Danny Sullivan writes something on a forum or something like that I’d like to know about that, even if the forum itself doesn’t have that much PageRank or something along those lines,” he added. “It’s definitely the case that it was a lot of fun to see the initial launch of rel=’author’. I think we probably will take another look at what else do we need to do to turn the crank and iterate and improve how we handle rel=’author’. Are there other ways that we can use that signal?”

    Before that, he had indicated that authorship could become more of a signal in the future, dubbing it a “long term trend.”

    At some point, something changed. Google started making reductions to how it used authorship rather than adding to it. Last fall, Cutts announced that Google would be reducing the amount of authorship results it showed by about 15%, saying that the move would improve quality.

    In June, Google announced it was doing away with authors’ profile photos and circle counts in authorship results, indicating that doing so would lend to a “better mobile experience and a more consistent design across devices.”

    But even then, results would still show a byline and contain a link to the author’s Google+ profile.

    Last week came the death blow. Google’s John Mueller announced that the company had made “the difficult decision” to stop showing authorship in search results, saying that the information wasn’t as useful to users as it had hoped, and that it could “even distract from those results”. Emphasis added.

    You know, because knowing more about a result – like who wrote it – is less useful.

    According to Mueller, removing authorship “generally” doesn’t seem to reduce traffic to sites, though you have to wonder if that’s the case for more well-known authors who stand to be affected by this the most. Mueller wrote:

    Going forward, we’re strongly committed to continuing and expanding our support of structured markup (such as schema.org). This markup helps all search engines better understand the content and context of pages on the web, and we’ll continue to use it to show rich snippets in search results.

    It’s also worth mentioning that Search users will still see Google+ posts from friends and pages when they’re relevant to the query — both in the main results, and on the right-hand side. Today’s authorship change doesn’t impact these social features.

    As Search Engine Land’s Danny Sullivan explains, just because authorship is now dead, that doesn’t mean “author rank” is.

    Cutts said earlier this year that Google uses author rank in “some ways,” including in the In-Depth Articles section. Google’s Amit Singhal has also suggested that the signal could come into play more in the future in terms of regular organic search results.

    Cutts said this late last year: “We are trying to figure out who are the authorities in the individual little topic areas and then how do we make sure those sites show up, for medical, or shopping or travel or any one of thousands of other topics. That is to be done algorithmically not by humans … So page rank is sort of this global importance. The New York times is important so if they link to you then you must also be important. But you can start to drill down in individual topic areas and say okay if Jeff Jarvis (Prof of journalism) links to me he is an expert in journalism and so therefore I might be a little bit more relevant in the journalistic field. We’re trying to measure those kinds of topics. Because you know you really want to listen to the experts in each area if you can.”

    Sullivan also points to an excerpt from Google Executive Chairman Eric Schmidt’s 2013 book The New Digital Age, which says: “Within search results, information tied to verified online profiles will be ranked higher than content without such verification, which will result in most users naturally clicking on the top (verified) results. The true cost of remaining anonymous, then, might be irrelevance.”

    The point to all of this is that even though so-called “authorship” is dead, it still matters to Google who you are, and that could have a much bigger impact on your visibility in the search engine than authorship itself ever did.

    But still, what a big waste of time, right? And how did Google go from thinking authorship information was so useful a year ago to finding it useless now?

    What do you think? Should Google have killed authorship? Do you believe the reasoning the company gave? Let us know in the comments.

    Image via Google+

  • Google Just Killed Authorship Entirely

    Google announced that it is no longer using authorship markup or displaying author information in search results, saying that it just wasn’t as useful as expected.

    Actually, it was Google’s John Mueller who announced the change on his personal Google+ page rather than on any official Google blog, which seems odd for a feature Google pushed on users so heavily a couple of years ago. Mueller writes:

    I’ve been involved since we first started testing authorship markup and displaying it in search results. We’ve gotten lots of useful feedback from all kinds of webmasters and users, and we’ve tweaked, updated, and honed recognition and displaying of authorship information. Unfortunately, we’ve also observed that this information isn’t as useful to our users as we’d hoped, and can even distract from those results. With this in mind, we’ve made the difficult decision to stop showing authorship in search results.

    (If you’re curious — in our tests, removing authorship generally does not seem to reduce traffic to sites. Nor does it increase clicks on ads. We make these kinds of changes to improve our users’ experience.)

    He goes on to note that Google will continue to expand support of structured markup like schema.org, and use it to show rich snippets in search results. He also says the changes won’t affect users seeing Google+ posts from friends and pages in search results or publisher markup.

    Asked in the comments if Google will still be using authorship data behind the scenes, and whether or not people should remove the code from their pages, Mueller said, “No, we’re no longer using it for authorship, we treat it like any other markup on your pages. Leaving it is fine, it won’t cause problems (and perhaps your users appreciate being able to find out more about you through your profile too).”

    Asked if there is no longer any value to showing Google (via interlinking with the Google+ profile) what pieces of work have been published online, Mueller responded, “Well, links are links, but we’re not using them for authorship anymore.”

    Some obviously feel like they’ve jumped through various hoops Google has thrown at them, only for it all to have been a waste of time. It’s still not exactly clear why taking it away makes search results more useful.

    Here’s Mueller’s full post:


    Image via Google+

  • Is Google Making Too Many Changes To Search?

    Google celebrated the ten-year anniversary of its initial public offering this week, and reflected on the past decade of search and some of the major changes it has made.

    Everybody knows that Google is constantly changing its algorithm. They’ve said in the past that they make changes every day (sometimes multiple changes). It would appear, however, that the changes Google makes (algorithmic and otherwise) are only getting more rapid.

    Do you think Google makes too many changes to its search algorithm and other features? Are results getting better or worse? Let us know what you think.

    Amit Singhal, who runs search at Google, took to Google+ to talk about how far the search engine has come over the past ten years. The highlights he mentioned as the “biggest milestones” of the past ten years include: Autocomplete, Translations, Directions and Traffic, Universal Search, Mobile and New Screens, Voice Search, Actions, The Knowledge Graph, “Info just for you,” and “answers before you ask.”

    Some of this stuff is indeed truly remarkable. I certainly wouldn’t want to go back to the time before instant search results. For that matter, the omnibox in Chrome was a revolutionary change in my opinion, and isn’t even mentioned.

    Some of the features have been more controversial. Knowledge Graph, for one, has kept who knows how many clicks away from third-party sites. A lot of people aren’t too happy with the way search has evolved to keep people on Google rather than sending them to relevant sites as it originally did. Either way, I wouldn’t expect Google to reverse course on that anytime soon.

    Singhal also mentioned that Google made over 890 “improvements” to search last year.

    “The heart of Google is still search,” he wrote. “And in the decade since our IPO, Google has made big bets on a range of hugely important areas in search that make today’s Google so much better than the 2004 version (see our homepage from back then below). Larry has described the perfect search engine as understanding exactly what you mean and giving you back exactly what you want. We’ve made a lot of progress on delivering you the right answers, faster. But we know that we have a long way to go — it’s just the beginning.”

    “We made more than 890 improvements to Google Search last year alone, and we’re cranking away at new features and the next generation of big bets all the time,” he said. “We’ve come a long way in 10 years — on Google and so many other general and specialized search apps, it’s now so much better than just the 10 blue links of years past. In 2024, the Google of 2014 will seem ancient, and the Google of 2004 prehistoric.”

    While some webmasters wouldn’t entirely agree that everything Google has done has been an improvement, I have to say, going back to the 2004 Google as a user might be a little annoying.

    The pace at which Google’s functionality and algorithm are changing is astonishing, though, and you have to wonder just what percentage of the changes are for the best. Obviously that’s subjective, but it makes you think.

    As Barry Schwartz at Search Engine Land notes, Google said in the past that it made 350 to 400 changes in 2009. In 2010, it was 550. Last year, it was nearly 900. Perhaps this year they’ll break 1,000.

    Not only are they changing things more frequently, but they’ve become less transparent about the changes they’re making. Google has always guarded its true secret sauce carefully, but at one point, they decided it might not be a bad thing to give people a closer look at the types of changes they were making.

    For a while there, Google was providing regular updates highlighting a lot of the changes it was making. This, the company said, was in the interest of transparency. It was actually pretty interesting, as you could see specific themes that Google was focusing on from month to month. For example, there was a time when many of the changes it was making were related to how the algorithm understands synonyms. You could sometimes see patterns in Google’s focus.

    As time went on, the lists became less regular, and eventually, they just stopped coming entirely, without a word of explanation. Finally after many months, Google said they stopped doing it because people were bored by them. People disagreed, but the updates never came back. That appears to be the end of it.

    These days, we’d be looking at some pretty long lists. Unfortunately, as Google is amping up its change frequency, the lists would be more helpful (and transparent) than ever for understanding Google’s approach to search.

    Sure, Google does make public certain changes. They recently announced that HTTPS will now be considered a ranking factor in its algorithm, for example. Okay, so that’s one out of probably nearly a thousand changes it’s making this year.

    Occasionally, they’ll blog about new features they’ve implemented, but most of how Google is evolving is kept in the dark, open for guesswork and third-party analysis.

    Google said a couple years ago that it runs about 20,000 search experiments a year. From time to time, bloggers pick up on some of these experiments, and we learn about them, but you’re not hearing about 20,000 of them in a year’s time. If Google is making more actual changes, you have to wonder if they’re running an increasing number of experiments, or just implementing more of them.

    When it comes to search engine optimization, change comes with the territory. The game has always been in a constant state of change. Still, Google appears to be making things more difficult than ever.

    But Google isn’t out to please webmasters and SEOs anyway. They want to make users happy (or so they say, anyway). The question is, are all these changes really making for a better experience?

    What do you think? Share your thoughts in the comments.

    Images via Google

  • Google Goes After More Link Networks

    Google continues to penalize link networks in Europe, hitting two more over the weekend.

    Google has been on a warpath for the last year or so, penalizing link networks designed to game search results. Google has now hit one in Spain and one in Germany.

    Typically, Google’s Matt Cutts would make mention of the networks on Twitter, but he’s currently on leave from work, and probably has better things to do. Johannes Mehlem tweeted about it in his absence:

    As others have pointed out, Germany is obviously part of Europe, so that would be two European networks. According to RustyBrick, Google sent a bunch of manual penalty emails in Spain.

    Last month, Google said it took action on networks in Poland, which the company said it was focusing on all the way back in February. Since then, it went after various other networks in Europe as well as Japan.

    Image via Google

  • New Google Penguin Update Is Getting Closer

    It’s looking like we’ll probably be seeing Google launch a new version of the Penguin update before too long, though Google still won’t give an exact timeframe. Either way, they’re working on it, and it’s coming.

    Google webmaster trends analyst John Mueller, who regularly participates in Google hangout conversations with webmasters, hinted that an update is probably not too far off.

    Here’s the video. He starts discussing it at 21 minutes and 40 seconds in:

    “At the moment, we don’t have anything to announce,” he says. “I believe Panda is one that is a lot more regular now, so that’s probably happening fairly regularly. Penguin is one that I know the engineers are working on, so it’s been quite a while now, so I imagine it’s not going to be that far away, but it’s also not happening this morning…”

    Barry Schwartz at Search Engine Roundtable, who pointed out Mueller’s comments, suggests Penguin 3 could be happening as early as today, saying, “All the tracking tools are going a bit nuts the whole week,” and pointing to some webmaster forum chatter and speculation.

    Of course, such chatter and speculation runs rampant pretty much all the time, so I wouldn’t put a whole lot of stock into that particular aspect, but it does look like if it’s not coming immediately, it’s in the cards soon.

    It has, after all, been over a year since Google pushed Penguin 2.0. Even 2.1 (or whatever you want to call it), which was bigger than the average refresh, was in October. There almost has to be one soon at this point.

    Image via YouTube

  • Google Notifies Webmasters About Their Annoying Faulty Redirects

    Google announced a couple months ago that it would start calling out sites in mobile results for faulty redirects. This was aimed at saving users the “common annoyance” of tapping a search result only to be redirected to a site’s mobile homepage.

    This happens when a site isn’t properly set up to handle requests from smartphones. As Google noted, it happens so frequently that there are actually comics about it.
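    For illustration only, here’s a minimal sketch of the fix for that pattern, assuming a Flask app and a separate m.example.com mobile host (both placeholders, not anything Google prescribes): redirect smartphone visitors to the equivalent mobile URL rather than dumping them on the mobile homepage.

```python
# A minimal, hypothetical sketch of avoiding faulty redirects: send smartphone visitors
# to the equivalent page on the mobile host, preserving the path they asked for, instead
# of redirecting everything to the mobile homepage. Flask and the m.example.com host are
# placeholders; real user-agent detection is more involved than this.
from flask import Flask, redirect, request

app = Flask(__name__)
MOBILE_HOST = "m.example.com"  # assumed separate mobile site

def is_smartphone(user_agent):
    # Crude check for illustration only.
    return "Mobile" in user_agent and "iPad" not in user_agent

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def serve(path):
    if is_smartphone(request.headers.get("User-Agent", "")):
        # Keep the requested path so users land on the page they tapped in search results.
        return redirect("https://%s/%s" % (MOBILE_HOST, path), code=302)
    return "desktop page for /%s" % path
```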

    Now, webmasters are getting notifications from Google Webmaster Tools indicating when their sites are guilty of this. Here’s one Marie Haynes tweeted out:

    This was reported on earlier by Search Engine Land, which also reports that Google has added a new color-coded syntax to the Fetch as Google feature within Webmaster Tools, which should make things a little easier at times.

    Image via Google

  • What Google’s New Ranking Signal Means For You

    This week, Google introduced the webmaster and SEO world to a new ranking signal. Webmasters using HTTPS (HTTP over TLS, or Transport Layer Security) to make their sites more secure will be looked upon more favorably than those that don’t in Google’s search engine.

    Do you think Google should use HTTPS as a ranking signal? If so, do you think it should be weighted heavily? Share your thoughts in the comments.

    That’s not to say that HTTPS trumps everything else. In fact, the company indicated that it’s a pretty weak signal, at least for now. You can expect it to grow in importance over time.

    In a blog post, Google webmaster trends analysts Zineb Ait Bahajji and Gary Illyes said, “For now it’s only a very lightweight signal — affecting fewer than 1% of global queries, and carrying less weight than other signals such as high-quality content — while we give webmasters time to switch to HTTPS. But over time, we may decide to strengthen it, because we’d like to encourage all website owners to switch from HTTP to HTTPS to keep everyone safe on the web.”

    As you may know, Google’s algorithm uses over 200 ranking signals to determine what search results to show users. Even if this one is lightweight, just how far down on that list the signal actually falls in terms of significance is anybody’s guess. Perhaps it’s not yet one of the most important, but many of the other signals are no doubt lightweight as well. And it’s not often that Google flat out says that any particular signal will likely increase in weight, so this is something all webmasters better pay attention to.

    “Security is a top priority for Google. We invest a lot in making sure that our services use industry-leading security, like strong HTTPS encryption by default. That means that people using Search, Gmail and Google Drive, for example, automatically have a secure connection to Google,” write Bahajji and Illyes. “Beyond our own stuff, we’re also working to make the Internet safer more broadly. A big part of that is making sure that websites people access from Google are secure. For instance, we have created resources to help webmasters prevent and fix security breaches on their sites.”

    That refers to Google’s Webmaster Help for Hacked Sites site, which helps those who have been hacked get back on track.

    At Google I/O (its developer conference) this summer, Google gave a presentation called HTTPS Everywhere, giving “a hands-on tour of how to make your websites secure by default”. If making your site secure wasn’t a good enough reason to watch that, perhaps Google making it a ranking signal will make it worth your while. You can view it in its entirety here:

    Google says it has seen a lot more webmasters adopting HTTPS, and has already been testing it as a ranking signal with positive results.

    If your site is already serving on HTTPS, you should be in good shape (as long as your whole site is on it), but you’re encouraged to test its security level and configuration using this tool.
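    If you’d rather script a quick sanity check of your own alongside that, here’s a minimal sketch (an assumed helper of ours, not Google’s or Qualys’ tool) that confirms the plain-HTTP version of a placeholder domain redirects and that the certificate validates:

```python
# A minimal sketch (an assumed helper, not Google's or Qualys' tool) for spot-checking an
# HTTP-to-HTTPS move: confirm the plain-HTTP URL issues a 301 to its HTTPS equivalent and
# that the certificate chain and hostname validate.
import http.client
import socket
import ssl

def first_response(host, path="/"):
    # http.client does not follow redirects, so we see the site's own redirect directly.
    conn = http.client.HTTPConnection(host, timeout=10)
    conn.request("GET", path)
    resp = conn.getresponse()
    return resp.status, resp.getheader("Location")

def certificate_expiry(host, port=443):
    # create_default_context() verifies the chain and hostname; a bad cert raises here.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

if __name__ == "__main__":
    host = "www.example.com"  # placeholder; use your own domain
    status, location = first_response(host)
    print("HTTP %s -> %s" % (status, location))  # expect 301 -> https://...
    print("Certificate valid until:", certificate_expiry(host))
```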

    If you’re looking to adopt HTTPS for your site, these are the basic tips for getting started straight from Google:

    • Decide the kind of certificate you need: single, multi-domain, or wildcard certificate
    • Use 2048-bit key certificates
    • Use relative URLs for resources that reside on the same secure domain
    • Use protocol relative URLs for all other domains
    • Check out our Site move article for more guidelines on how to change your website’s address
    • Don’t block your HTTPS site from crawling using robots.txt
    • Allow indexing of your pages by search engines where possible. Avoid the noindex robots meta tag.

    I’d also advise you to read through this Google help document on the subject, and stay tuned to Google’s Webmaster Blog, as it says it will be talking more about best practices over the coming weeks.

    Google webmaster trends analyst Pierre Far responded to some concerns in a Hacker News thread on the subject. One concern was that Google will treat the HTTP and HTTPS versions of a domain as separate properties.

    “That’s not quite accurate,” he says. “It’s on a per-URL basis, not properties. Webmaster Tools asks you to verify the different _sites_ (HTTP/HTTPS, www/non-www) separately because they can be very different. And yes I’ve personally seen a few cases – one somewhat strange example bluntly chides their users when they visit the HTTP site and tells them to visit the site again as HTTPS.”

    Another concern was that even if you 301 every HTTP to HTTPS when you transition, all of your current rankings and PageRank will be irrelevant. According to Far, this is simply “not true.”

    He responded, “If you correctly redirect and do other details correctly (no mixed content, no inconsistent rel=canonical links, and everything else mentioned in the I/O video I referenced), then our algos will consolidate the indexing properties onto the HTTPS URLs. This is just another example of correctly setting up canonicalization.”

    Far, who was involved with the signal’s launch, also weighed in to address what he says is a “very common misconception”:

    Some webmasters say they have “just a content site”, like a blog, and that doesn’t need to be secured. That misses out two immediate benefits you get as a site owner:

    1. Data integrity: only by serving securely can you guarantee that someone is not altering how your content is received by your users. How many times have you accessed a site on an open network or from a hotel and got unexpected ads? This is a very visible manifestation of the issue, but it can be much more subtle.

    2. Authentication: How can users trust that the site is really the one it says it is? Imagine you’re a content site that gives financial or medical advice. If I operated such a site, I’d really want to tell my readers that the advice they’re reading is genuinely mine and not someone else pretending to be me.

    On top of these, your users get obvious (and not-so-obvious) benefits.

    If your site is in Google News, and you’re concerned about how switching might impact that, Barry Schwartz got this statement from Google’s John Mueller: “I checked with the News folks — HTTPS is fine for Google News, no need to even tell them about it. If you do end up noticing anything, that would (most likely) be a bug and something worth letting the Google News team know about. A bunch of sites are on HTTPS in Google News, it would be great to have more.”

    As Schwartz points out, however, some have had issues with switching when it comes to support from Google’s Change of Address Tool in Webmaster Tools.

    “In short, when you do a URL change in Google, from one URL to another, i.e. HTTP to HTTPS, you want to use the Change Of Address Tool, as the Google documents clearly say. But it simply does not work from HTTP to HTTPS within Google Webmaster Tools,” he writes.

    The general reaction to Google turning HTTPS into a ranking signal has been mixed. Many see it as a positive, but others don’t think it should make a difference if the content is relevant. Some have even suggested the change has already negatively impacted their sites, though such claims are questionable this early into the existence of such a “lightweight” signal.

    The fact is, we’re just going to have to wait and see what happens over time as Google starts to give the signal more weight. By that time, however, it’s probably going to be hard to tell, because it’s doubtful that Google will tell everybody when they crank up the dial.

    Is this the right move for Google’s search results? Let us know what you think.

    Image via Google

  • Google Announces New Ranking Signal

    Google announced that HTTPS is now a ranking signal used in its algorithm. The company has been pushing the use of HTTPS (HTTP over TLS/Transport Layer Security) for quite some time, and called for “HTTPS everywhere” at Google I/O, so this shouldn’t come as much of a surprise.

    The company says it’s seeing more and more webmasters adopting it, and that over the past few months it’s been running tests taking into account whether sites use secure encrypted connections as a signal.

    “We’ve seen positive results, so we’re starting to use HTTPS as a ranking signal,” declares Google in a blog post. “For now it’s only a very lightweight signal — affecting fewer than 1% of global queries, and carrying less weight than other signals such as high-quality content — while we give webmasters time to switch to HTTPS. But over time, we may decide to strengthen it, because we’d like to encourage all website owners to switch from HTTP to HTTPS to keep everyone safe on the web.”

    That’s good to know. Frankly, I’m surprised they even hinted at how much weight they’re giving it.

    Google says it will publish detailed best practices to make adoption easier, and to avoid common mistakes, in the coming weeks. It has some initial tips in the blog post linked above.

    “Keeping users’ data safe is important, and one of the thoughts behind adding HTTPS as a ranking signal in Google’s web-search. HTTPS protects the connection to the website through authentication and encryption,” says Google’s John Mueller.

    Google notes that if you’re already serving on HTTPS, you can test its security level and configuration using this Qualys Lab tool.

    Image via Google

  • MetaFilter Reportedly Recovering From Google Update

    A few months ago, MetaFilter founder Matt Haughey revealed in a blog post that a decline in Google traffic resulting from a 2012 algorithm update had led him to lay off some of the site’s staff. He wrote:

    Today I need to share some unfortunate news: because of serious financial downturn, MetaFilter will be losing three of its moderators to layoffs at the end of this month. What that means for the site and the site’s future are described below.

    While MetaFilter approaches 15 years of being alive and kicking, the overall website saw steady growth for the first 13 of those years. A year and a half ago, we woke up one day to see a 40% decrease in revenue and traffic to Ask MetaFilter, likely the result of ongoing Google index updates. We scoured the web and took advice of reducing ads in the hopes traffic would improve but it never really did, staying steady for several months and then periodically decreasing by smaller amounts over time.

    The long-story-short is that the site’s revenue peaked in 2012, back when we hired additional moderators and brought our total staff up to eight people. Revenue has dropped considerably over the past 18 months, down to levels we last saw in 2007, back when there were only three staffers.

    Google even confirmed that the site was hit by a previously undisclosed algorithm change, one it had not acknowledged at the time. Google’s Matt Cutts indicated in May that a solution was on the way.

    While it may have been more like months than weeks, it appears that the solution may have finally come. The site’s traffic is now reportedly back on the rise. Barry Schwartz at Search Engine Roundtable points to data from Searchmetrics suggesting the site’s traffic has nearly recovered.

    So far, we haven’t seen any acknowledgement of this from Haughey, but some other sites that were hit at the same time as MetaFilter are apparently recovering as well.

    Image via MetaFilter

  • History of SEO: Evolve or Die. 24 years in 4 minutes.

    Remember when Netscape Navigator and Ask Jeeves ruled the roost? Here’s a 4-minute throwback showing how search engines have changed over the last 24 years. Learn how smart marketers figured out how to evolve with the changes.

    Courtesy of Customer Magnetism

  • Did Google Penalize A Site For A Natural Link From Moz?

    Update: We’ve updated the post with some additional comments Fishkin gave us via email. See the end of the article.

    Google has been on a warpath against what it thinks are unnatural links, but many think it’s off the mark with some of them. Meanwhile, the search giant scares people away from using even natural links in some cases, whether it intends to or not.

    Have Google’s warnings to webmasters had an impact on your linking practices? Let us know in the comments.

    When one thinks about reputable companies and websites in the SEO industry, Moz (formerly SEOmoz) is likely to be somewhere near the top of the list. YouMoz is a section of the site that gives a voice to people in the industry who don’t work for the company. It’s essentially a place for guest blog posts.

    YouMoz, while described as a “user generated search industry blog,” isn’t user-generated content in the same way that something like Google’s YouTube is. YouMoz content must be accepted by the Moz staff, which aims to post only the highest-quality submissions it receives. This is the way a site is supposed to publish guest blog posts. In fact, Google’s Matt Cutts seems to agree.

    If you’ll recall, Google started cracking down on guest blogging earlier this year. Google made big waves in the SEO industry when it penalized the guest blogging network MyBlogGuest.

    A lot of people thought Google went too far with that one, and many who either hosted guest blog posts or contributed them to other sites were put on edge. Reputable sites became afraid to link naturally, when the whole point is for links to be natural (isn’t it?).

    Understandably concerned about Google’s view of guest blogging, Moz reached out to Cutts to get a feel for whether its own content was in any danger, despite its clear quality standards. In a nutshell, the verdict was no. It was not in danger. Moz co-founder Rand Fishkin shares what Cutts told them back then:

    Hey, the short answer is that if a site A links to spammy sites, that can affect site A’s reputation. That shouldn’t be a shock–I think we’ve talked about the hazards of linking to bad neighborhoods for a decade or so.

    That said, with the specific instance of Moz.com, for the most part it’s an example of a site that does good due diligence, so on average Moz.com is linking to non-problematic sites. If Moz were to lower its quality standards then that could eventually affect Moz’s reputation.

    The factors that make things safer are the commonsense things you’d expect, e.g. adding a nofollow will eliminate the linking issue completely. Short of that, keyword rich anchortext is higher risk than navigational anchortext like a person or site’s name, and so on.
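
    Cutts’ checklist (nofollow first, then anchor text) lends itself to a rough automated first pass. The sketch below is hypothetical and certainly not Moz’s actual review process; it uses Python’s standard-library HTML parser to list a post’s links, noting which carry rel="nofollow" and what their anchor text says, so an editor can then eyeball any followed, keyword-rich anchors by hand.

    ```python
    # Rough link-audit sketch (hypothetical, not Moz's process): list the links
    # in a chunk of HTML, flagging whether each is nofollowed and capturing its
    # anchor text for manual review. Uses only the standard library.
    from html.parser import HTMLParser

    class LinkAuditParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []       # collected {"href", "nofollow", "text"} dicts
            self._current = None  # the link currently being read, if any

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                attrs = dict(attrs)
                nofollow = "nofollow" in (attrs.get("rel") or "").lower()
                self._current = {"href": attrs.get("href", ""),
                                 "nofollow": nofollow,
                                 "text": ""}

        def handle_data(self, data):
            if self._current is not None:
                self._current["text"] += data

        def handle_endtag(self, tag):
            if tag == "a" and self._current is not None:
                self.links.append(self._current)
                self._current = None

    # Made-up snippet of post HTML for illustration.
    html = '<p>Join the <a href="https://example.com/">Photography SEO community</a>.</p>'
    parser = LinkAuditParser()
    parser.feed(html)
    for link in parser.links:
        status = "nofollow" if link["nofollow"] else "followed"
        print(status, repr(link["text"].strip()), "->", link["href"])
    ```

    It won’t judge whether an anchor is “keyword rich,” of course; that part still takes a human.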

    It sounded like YouMoz was pretty safe. Until now. Contributor Scott Wyden got a warning from Google about links violating its guidelines, which included his YouMoz article as well as a scraper post (that’s a whole other issue Google should work out).

    “Please correct or remove all inorganic links, not limited to the samples provided above,” Google’s message said. “This may involve contacting webmasters of the sites with the inorganic links on them. If there are links to your site that cannot be removed, you can use the disavow links tool…”
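
    As an aside, for anyone who does end up using the disavow tool Google mentions, the file it accepts is plain text: one URL or “domain:” entry per line, with lines beginning with “#” treated as comments. The helper below is hypothetical (ours, not Google’s), and the domains and URLs in it are placeholders.

    ```python
    # Hypothetical helper (ours, not Google's): writes the plain-text file the
    # disavow links tool accepts. One URL or "domain:" entry per line; lines
    # starting with "#" are comments. The example entries are placeholders.
    def write_disavow_file(urls, domains, path="disavow.txt"):
        lines = ["# Links we could not get removed despite outreach"]
        lines += [f"domain:{d}" for d in domains]  # drop everything from these domains
        lines += list(urls)                        # drop specific URLs only
        with open(path, "w", encoding="utf-8") as f:
            f.write("\n".join(lines) + "\n")

    write_disavow_file(
        urls=["http://scraper.example/copied-post.html"],
        domains=["scraper.example"],
    )
    ```

    Google’s own message frames disavowal as a fallback for links that can’t be removed, which is exactly the rub here: Moz argues there was nothing inorganic about the link in the first place.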

    The problem is that, at least according to Moz, the links were not inorganic.

    “As founder, board member, and majority shareholder of Moz, which owns Moz.com (of which YouMoz is a part), I’m here to tell Google that Scott’s link from the YouMoz post was absolutely editorial,” says Fishkin in a blog post. “Our content team reviews every YouMoz submission. We reject the vast majority of them. We publish only those that are of value and interest to our community. And we check every frickin’ link.”

    “Scott’s link, ironically, came from this post about Building Relationships, Not Links,” he continues. “It’s a good post with helpful information, good examples, and a message which I strongly support. I also, absolutely, support Scott’s earning of a link back to his Photography SEO community and to his page listing business books for photographers (this link was recently removed from the post at Scott’s request). Note that “Photography SEO community” isn’t just a descriptive name, it’s also the official brand name of the site Scott built. Scott linked the way I believe content creators should on the web: with descriptive anchor text that helps inform a reader what they’re going to find on that page. In this case, it may overlap with keywords Scott’s targeting for SEO, but I find it ridiculous to hurt usability in the name of tiptoeing around Google’s potential overenforcement. That’s a one-way ticket to a truly inorganic, Google-shaped web.”

    “If Google doesn’t want to count those links, that’s their business (though I’d argue they’re losing out on a helpful link that improves the link graph and the web overall). What’s not OK is Google’s misrepresentation of Moz’s link as ‘inorganic’ and ‘in violation of our quality guidelines’ in their Webmaster Tools. I really wish YouMoz was an outlier. Sadly, I’ve been seeing more and more of these frustratingly misleading warnings from Google Webmaster Tools.”

    Has Moz lowered its standards in the time that has passed since Cutts’ email? Fishkin certainly doesn’t think so.

    “I can promise that our quality standards are only going up,” he writes, also pointing to an article and a conference talk from the site’s director of community Jen Lopez on this very subject.

    “We’d love if Google’s webmaster review team used the same care when reviewing and calling out links in Webmaster Tools,” Fishkin writes.

    Burn.

    Cutts would most likely have something to say about all of this, but he happens to be on leave, and isn’t getting involved with work until he comes back. He has been on Twitter talking about other things though. It will be interesting to see if he gets sucked back in.

    The whole ordeal should only serve to scare more people away from natural linking as Google has already been doing. If Google is penalizing a site for links from a site like Moz, what’s safe?

    We’ve reached out to Fishkin for further comment, and will update accordingly.

    Update: Fishkin tells us via email that he doesn’t think Google’s targeting of guest blogging in general is off base, but that their reviewers “need to be more discerning in marking problematic links.”

    He goes on to say: “When they select editorial links to highlight as problematic ones, they’re creating a serious problem for site owners on both sides. Correctly identifying non-editorial links really does help site owners improve their behavior, and I know there’s plenty of folks still being manipulative out there.”

    “In terms of Google ruining natural linking, I suspect that’s an unintended side effect of their efforts here. They’re trying to do a good thing – to show which links are causing them not to trust websites. But when they mark editorial links as inorganic, they inadvertently scare site owners away from making positive contributions to the web with the accordingly correct citation of their work. That’s how you get a Google-shaped web, rather than a web-shaped Google.”

    Image via Moz

    Do you think Google is going overboard here? Share your thoughts in the comments.