WebProNews

Tag: Search

  • How Much Of Google’s Webspam Efforts Come From These Patents?

    Bill Slawski over at SEO By The Sea, who is always up on search industry patents, has an interesting article talking about a patent that might be related to Google’s new Webspam Update.

    It’s called “Methods and systems for identifying manipulated articles.” The patent’s abstract says:

    Systems and methods that identify manipulated articles are described. In one embodiment, a search engine implements a method comprising determining at least one cluster comprising a plurality of articles, analyzing signals to determine an overall signal for the cluster, and determining if the articles are manipulated articles based at least in part on the overall signal.

    The patent was filed all the way back in 2003 and was awarded in 2007. Of course, the new update is really based on principles Google has held for years. The update is designed to target violators of its quality guidelines.

    Patent jargon makes my head hurt, and I’m willing to bet there’s a strong possibility you don’t want to sift through this whole thing. Slawski is a master at explaining these things, so I’ll just quote him from his piece.

    “There are a couple of different elements to this patent,” he writes. “One is that a search engine might identify a cluster of pages that might be related to each other in some way, like being on the same host, or interlinked by doorway pages and articles targeted by those pages. Once such a cluster is identified, documents within the cluster might be examined for individual signals, such as whether or not the text within them appears to have been generated by a computer, or if meta tags are stuffed with repeated keywords, if there is hidden text on pages, or if those pages might contain a lot of unrelated links.”
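    The cluster-then-signals approach Slawski describes can be sketched in code. Everything below is an illustrative assumption: the signal names, weights, and threshold are invented for the sketch, not taken from the patent.

    ```typescript
    // Hypothetical sketch of the patent's idea: score individual spam signals
    // per document, then combine them into an overall signal for the cluster.
    interface PageSignals {
      hiddenText: boolean;        // hidden text detected on the page
      stuffedMetaTags: boolean;   // meta tags stuffed with repeated keywords
      generatedText: boolean;     // text appears computer-generated
      unrelatedLinkRatio: number; // fraction of links judged off-topic (0..1)
    }

    // Per-document score: each signal contributes to a 0..1 spam likelihood.
    function documentSignal(p: PageSignals): number {
      let score = 0;
      if (p.hiddenText) score += 0.3;
      if (p.stuffedMetaTags) score += 0.3;
      if (p.generatedText) score += 0.2;
      score += 0.2 * p.unrelatedLinkRatio;
      return Math.min(score, 1);
    }

    // Overall cluster signal: average the per-document scores and flag the
    // whole cluster as manipulated if the average crosses a threshold.
    function clusterIsManipulated(cluster: PageSignals[], threshold = 0.5): boolean {
      const overall =
        cluster.reduce((sum, p) => sum + documentSignal(p), 0) / cluster.length;
      return overall >= threshold;
    }
    ```

    The point of averaging over the cluster, per the abstract, is that a group of interlinked doorway pages can be judged together even if no single page trips a filter on its own.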

    He goes on to talk about many of the improvements Google has made to its infrastructure and spam-detection technologies. He also notes that two phrase-based patents were granted to Google this week: one for “Phrase extraction using subphrase scoring” and the other for “Query phrasification.” The abstracts for those are, respectively:

    An information retrieval system uses phrases to index, retrieve, organize and describe documents. Phrases are extracted from the document collection. Documents are the indexed according to their included phrases, using phrase posting lists. The phrase posting lists are stored in an cluster of index servers. The phrase posting lists can be tiered into groups, and sharded into partitions. Phrases in a query are identified based on possible phrasifications. A query schedule based on the phrases is created from the phrases, and then optimized to reduce query processing and communication costs. The execution of the query schedule is managed to further reduce or eliminate query processing operations at various ones of the index servers.

    And…

    An information retrieval system uses phrases to index, retrieve, organize and describe documents. Phrases are extracted from the document collection. Documents are the indexed according to their included phrases, using phrase posting lists. The phrase posting lists are stored in an cluster of index servers. The phrase posting lists can be tiered into groups, and sharded into partitions. Phrases in a query are identified based on possible phrasifications. A query schedule based on the phrases is created from the phrases, and then optimized to reduce query processing and communication costs. The execution of the query schedule is managed to further reduce or eliminate query processing operations at various ones of the index servers.

    If you’re really interested in tech patents and the inner workings of search engines, I’d suggest reading Slawski’s post. I’d also suggest watching Matt Cutts explain how Google Search works.

  • Google Webspam Update: Losers & Winners, According To Searchmetrics [Updated]

    Update: It turns out that Google launched a Panda refresh a few days ago, and Matt Cutts says this is more likely the culprit for Searchmetrics’ lists.

    Danny Sullivan has Cutts’ comment:

    There’s a pretty big flaw with this “winner/loser” data. Searchmetrics says that they’re comparing by looking at rankings from a week ago. We rolled out a Panda data refresh several days ago. Because of the one week window, the Searchmetrics data include not only drops because of the webspam algorithm update but also Panda-related drops. In fact, when our engineers looked at Searchmetrics’ list of 50 sites that dropped, we only saw 2-3 sites that were affected in any way by the webspam algorithm update. I wouldn’t take the Searchmetrics list as indicative of the sites that were affected by the webspam algorithm update.

    Google is in the process of rolling out its Webspam update, which the company says will impact about 3.1% of queries in English.

    Whenever Google announces major updates, Searchmetrics usually puts together some data about what it determines to be the top winners and losers from the update, in terms of search visibility. They’ve put out their first lists for this update.

    “There could be other iterations from Google that we’re not aware of at the moment, but Searchmetrics is tracking closely and will update the list accordingly,” a Searchmetrics spokesperson tells WebProNews. “These are the first numbers and Searchmetrics will have more in the future, but we want to stress that the loser list could change in the next few days.”

    “It’s unusual for Google to make major update on a Wednesday,” says Searchmetrics Founder Marcus Tober. “Normally Google makes this kind of updates on a Monday or Thursday. That’s why I assume that in the next days we’ll see more updates and this update is just the beginning. That’s why, all results in the winner and loser tables are marked as preview.”

    “In a first study I took over 50.000 keywords from short-head to medium and low search volume and looked at the current rankings from position 1 to 100,” Tober adds. “So I analyzed 5,000,000 URLs and compared the rankings to last week. In my second study which is not finished yet I take one million keywords to get a complete overview, but this will take more time.”

    Google did say yesterday that the update would be rolling out over the next few days.

    Here’s Searchmetrics’ list of biggest losers:

    Webspam losers

    Here’s their list of winners:

    Search Winners

    We’ll be taking a closer look at some of the sites on these lists.

    See also:

    Google Webspam Algorithm Update Draws Mixed Reviews From Users
    Google Webspam Update: Where’s The Viagra?
    Google Webspam Update: “Make Money Online” Query Yields Less Than Quality Result

  • Can Your Site Lose Its Rankings Because Of Competitors’ Negative SEO?

    Rand Fishkin, the well-known SEO expert and Founder/CEO of SEOmoz, has challenged the web to see if anyone can take down his sites’ rankings in Google by way of negative SEO – the practice of implementing tactics specifically aimed at hurting competitors in search, as opposed to improving the rankings of one’s own site. Fishkin tells WebProNews why he’s made such a challenge.

    Do you think negative SEO practices can be effective in hurting a competitor’s rankings, even if that competitor is playing by all of Google’s rules and has a squeaky clean reputation? Let us know what you think.

    First, you’ll need a little background. There’s a thread in the Traffic Planet forum started by member Jammy (hat tip to Barry Schwartz), who talks about an experiment, run with the cooperation of another member, in which they were able to have a hugely negative impact on two sites.

    “We carried out a massive scrapebox blast on two sites to ensure an accurate result,” Jammy writes. I’m not going to get into all of the details about why they targeted specific sites or even the sites themselves here. You can read the lengthy forum thread if you want to go through all of that.

    The important thing to note, however, is that the experiment apparently worked. But Fishkin maintains that the sites in question weren’t necessarily in the best situations to begin with.

    “In terms of negative SEO on the whole – I think it’s terrible that it could hurt a site’s rankings,” Fishkin said in the forum thread. “That creates an entire industry and practice that no one (not engines, not marketers, not brands) benefits from. Only the spammers and link network owners win, and that’s exactly the opposite of what every legitimate player in the field wants. Thus, I’m wholeheartedly behind identifying and exposing whether Google or Bing are wrongly penalizing sites rather than merely removing the value passed by spam links. If we can remove that fear and that process, we’ve done the entire marketing and web world a huge favor.”

    “I’ve never seen it work on a truly clean, established site,” Fishkin tells WebProNews, regarding negative SEO. He says the examples from the forum “all had some slightly-seriously suspicious characteristics and not wholly clean link profiles already, and it’s hard to know whether the bad links hurt them or whether they merely triggered a review or algorithm that said ‘this site doesn’t deserve to rank.’”

    “If negative SEO can take down 100% clean sites that have never done anything untoward and that have built up a good reputation on the web, it’s more concerning and something Google’s search quality engineers would need to address immediately (or risk a shadow industry of spammers popping up to do website takedowns),” he adds.

    When asked why he would antagonize those who disagree with his view by offering his own sites as targets, Fishkin says, “Two things – one, I’d rather they target me/us than someone else. We can take the hit and we can help publicize/reach the right folks if something does go wrong. Other targets probably wouldn’t be so lucky.”

    Perhaps there should be a Good Guy Rand meme.

    Good Guy Rand (Fishkin)

    “Two – if this is indeed possible, it’s important for someone who can warn the search/marketing industry to have evidence and be aware of it,” says Fishkin. “Since we carefully monitor our metrics/analytics, haven’t ever engaged in any spam and have lines over to some folks who could help, we’re a good early warning system.”

    So what happens if challengers are successful at taking down either SEOmoz or RandFishkin.com?

    “SEOmoz gets ~20% of its traffic from non-branded Google searches, so worst case, we’d see a 20-25% hit for a few days or a few weeks,” Fishkin tells WebProNews. “That’s survivable and it’s worth the price to uncover whether the practice is a problem. Our core values (TAGFEE) dictate that this is precisely the kind of area where we’d be willing to take some pain in order to prevent harm to others.”

    When asked if he’s confident that Google will correct the problem in a timely fashion if he’s proven wrong, Fishkin says, “Fairly confident, though not 100%. I have my fingers crossed it won’t get too messy for too long, but my COO and community manager are a little nervous.”

    Fishkin concludes our conversation with: “I’d say that the evidence on the Traffic Power thread is strong that if a site already has some questionable elements, a takedown is possible. But, it’s not yet proven whether wholly clean sites can be brought down with negative SEO. I hope that’s not the case, but I suspect the hornet’s nest I kicked up will probably answer that for us in the next month or two.”

    Word around the industry is that Google is making SEO matter less, at least in terms of over-optimization. Google’s Matt Cutts talked about this last month at SXSW, and those remarks led to a great deal of discussion and speculation as to just what this would entail.

    “The idea,” he said, “is basically to try and level the playing ground a little bit, so all those people who have sort of been doing, for lack of a better word, ‘over-optimization’ or overly doing their SEO, compared to the people who are just making great content and trying to make a fantastic site, we want to sort of make that playing field a little more level.”

    One thing’s for sure though: If negative SEO can truly impact clean sites, that’s not quite the level playing field Google is aspiring to create.

    Fishkin’s experiment is going to be an interesting one to keep an eye on. If SEOmoz can be severely impacted from this, who’s to say your site can’t? Do you think it’s possible? Tell us in the comments.

  • Google Analytics Gets New Site Speed Report, User Timings

    Google announced today that it has added a new site speed report to Google Analytics, called User Timings. The report lets users track custom timings, and shows the execution speed or load time of any hit, event or user interaction.

    “This can include measuring how quickly specific images and/or resources load, how long it takes for your site to respond to specific button clicks, timings for AJAX actions before and after onLoad event, etc. User timings will not alter your pageview count, hence, makes it the preferred method for tracking a variety of timings for actions in your site,” explains Google’s Analytics team in a blog post.

    “To collect User Timings data, you’ll need to add JavaScript timing code to the interactions you want to track using the new _trackTiming API included in ga.js (version 5.2.6+) for reporting custom timings,” Google adds. “This API allows you to track timings of visitor actions that don’t correspond directly to pageviews (like Event Tracking). User timings are defined using a set of Categories, Variables, and optional Labels for better organization. You can create various categories and track several timings for each of these categories. Please refer to the developers guide for more details about the _trackTiming API.”
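    As a rough illustration of what Google describes, here’s a hedged sketch of timing an AJAX action and reporting it via `_trackTiming` on the ga.js async queue. The category, variable, and label values are made-up examples, and in a real page the `_gaq` queue comes from the standard Analytics snippet rather than being created locally.

    ```typescript
    // In a real page, _gaq is created by the standard Google Analytics snippet;
    // it's declared here only so the sketch is self-contained.
    const _gaq: any[] = [];

    // Time an AJAX action and report the elapsed milliseconds to Analytics.
    function timeAjaxAction(url: string, fetchFn: (url: string) => Promise<unknown>) {
      const start = Date.now();
      return fetchFn(url).then((result) => {
        const elapsedMs = Date.now() - start;
        // _trackTiming(category, variable, time in ms, optional label, optional sample %)
        _gaq.push(['_trackTiming', 'ajax', 'load-results', elapsedMs, url, 100]);
        return result;
      });
    }
    ```

    Because the timing is pushed as its own hit rather than a pageview, it fits Google’s note that User Timings won’t alter your pageview count.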

    Users can find the report under the Content section by clicking “User Timings.” From there, you can select “Explorer,” “Performance” or “Map Overlay” for different views.

    Google announced the actual Site Speed reports last month, including the Site Speed Overview. Now, you can get even more insight into how your pages are performing.

    Google has said flat out that speed is a ranking factor, though Google’s Matt Cutts has somewhat downplayed how often it really makes a huge difference. So, kind of a mixed message, though from a user experience standpoint, you’ll certainly want your pages performing well. Nobody wants to stick around on a slow site.

  • Will Google Add Google Drive Content To Search?

    Google finally announced the official launch of Google Drive today, after years of anticipation. What do you suppose the chances are that Google will integrate your stored files into search results when relevant? We asked Google if they have such plans. “We have nothing to announce at this time,” a Google spokesperson tells us.

    Not exactly a no.

    Think about how much Google already does to personalize your search results. The launch of Search Plus Your World, earlier this year, is a bold example of this. Google bases relevancy, to some extent, on your personal connections with others. Why not your personal collection of files? What’s more personal than that?

    Google is giving users 5 GB of free storage, and more if you’re willing to pay (25 GB for $2.49 per month, 100 GB for $4.99 per month and 1 TB for $49.99 per month). You can store a lot of files in 1 TB, and a lot of documents (it’s tied to Google Docs). Google wants to organize the world’s information, and it clearly wants to personalize the user’s experience in ways that are relevant to them, so I can’t see why Google wouldn’t integrate this into search in one way or another. Likewise for Google Music.

    Google has spent a significant amount of time, particularly since the launch of Google+, casting its products basically as features of one greater Google. The company’s recently revised privacy policy emphasizes this even more. Google already stores a lot of your stuff, and Google Drive will increase that a great deal for some users. Why not add Google Drive, Google Music, and even Gmail content into the mix, when relevant? That is, if they can get it right.

    That’s really the question. Can they? They were confident that Search Plus Your World got social search right, but there have been plenty of complaints about its impact on relevancy. I can see adding one’s personal files into the mix sparking similar criticism.

    It’s not even about privacy. Google already has your files, and with its new privacy policy, it can share data from one Google service to the next. That doesn’t mean they’re sharing it with anybody else. The policy, however, would seemingly make it easy for Google to provide such an offering.

    How well it would be received would probably depend in large part on how it was implemented. I have to wonder: if Google had launched its personalized Search Plus Your World results in a more Wajam-like fashion, where they’re not as intrusive to the regular search experience, would people have been so critical?

    There’s another personalized search application called Greplin, which does something similar to what we’re talking about here. It lets you search through Gmail, Facebook, Twitter, Dropbox, LinkedIn, Google Docs and various other services. I can see Google offering something similar, based on its own products, for people who have files, documents, or conversations they may wish to find.

    This is all speculative, of course. It’s a what if scenario at this point. But, what if? Would you find such a service useful or would it just get in the way? Maybe we’ll see one day soon, or maybe it will never happen. What do you think? Wouldn’t that truly be Search Plus YOUR World?

    Google does emphasize Google Drive’s search as one of its key features. Here’s what Google’s announcement says about that:

    Search everything. Search by keyword and filter by file type, owner and more. Drive can even recognize text in scanned documents using Optical Character Recognition (OCR) technology. Let’s say you upload a scanned image of an old newspaper clipping. You can search for a word from the text of the actual article. We also use image recognition so that if you drag and drop photos from your Grand Canyon trip into Drive, you can later search for [grand canyon] and photos of its gorges should pop up. This technology is still in its early stages, and we expect it to get better over time.

  • Gideon Sundback Google Doodle Unzips Your Search Results

    Hey Google, your fly’s open!

    Today’s Google Doodle is a fun interactive celebration of Swedish-American engineer Gideon Sundbäck, best known for his work in the development of the zipper.

    In 1906, Sundbäck got a job at the Universal Fastener Company in Hoboken, NJ. The Universal Fastener Company was launched in order to manufacture a new “clasp locker” device that debuted at the Chicago World Fair in 1893.

    By 1911, Sundbäck had impressed management and married the plant manager’s daughter, both of which helped him land the role of head designer. In 1913, he unveiled what we think of as the modern zipper. He spent the next several years refining the design, eventually designing the machine to manufacture his new product as well.

    While the zipper didn’t see universal adoption until the 1930s, once it did, well, you know the rest. Can you imagine a world without zippers?

    Today’s Google Doodle celebrates his birthday with a classic coil zipper. Sundbäck, who died in 1954, would have turned 132 today.

    If you click the Google homepage, the zipper will unzip to your search results automatically. If you click and hold on the slider, you can have a little bit of fun.

    Other recent Google Doodles include a fabulous Earth Day graphic on Sunday all across the world. Yesterday, Google honored the 30th anniversary of the ZX Spectrum computer as well as St. George’s Day with an awesomely retro Doodle.

  • How Google Ranks Content, According To Matt Cutts

    Google’s Matt Cutts has put out a new Webmaster Help video. This one is particularly interesting, and at nearly 8 minutes, much longer than the norm. It goes fairly in depth into how Google crawls content and attempts to rank it by relevancy. PageRank, you’ll find, is still a key ingredient.

    He starts off by talking about how far Google has come in terms of crawling. When Cutts started at Google, they were only crawling every three or four months.

    “We basically take page rank as the primary determinant,” says Cutts. “And the more page rank you have– that is, the more people who link to you and the more reputable those people are– the more likely it is we’re going to discover your page relatively early in the crawl. In fact, you could imagine crawling in strict page rank order, and you’d get the CNNs of the world and The New York Times of the world and really very high page rank sites. And if you think about how things used to be, we used to crawl for 30 days. So we’d crawl for several weeks. And then we would index for about a week. And then we would push that data out. And that would take about a week.”

    He continues on with the history lesson, talking about the Google Dance, Update Fritz and things, and eventually gets to the present.

    “So at this point, we can get very, very fresh,” he says. “Any time we see updates, we can usually find them very quickly. And in the old days, you would have not just a main or a base index, but you could have what were called supplemental results, or the supplemental index. And that was something that we wouldn’t crawl and refresh quite as often. But it was a lot more documents. And so you could almost imagine having really fresh content, a layer of our main index, and then more documents that are not refreshed quite as often, but there’s a lot more of them.”

    Google continues to emphasize freshness, as we’ve seen in the company’s monthly lists of algorithm changes the last several months.

    “What you do then is you pass things around,” Cutts continues. “And you basically say, OK, I have crawled a large fraction of the web. And within that web you have, for example, one document. And indexing is basically taking things in word order. Well, let’s just work through an example. Suppose you say Katy Perry. In a document, Katy Perry appears right next to each other. But what you want in an index is which documents does the word Katy appear in, and which documents does the word Perry appear in? So you might say Katy appears in documents 1, and 2, and 89, and 555, and 789. And Perry might appear in documents number 2, and 8, and 73, and 555, and 1,000. And so the whole process of doing the index is reversing, so that instead of having the documents in word order, you have the words, and they have it in document order. So it’s, OK, these are all the documents that a word appears in.”

    “Now when someone comes to Google and they type in Katy Perry, you want to say, OK, what documents might match Katy Perry?” he continues. “Well, document one has Katy, but it doesn’t have Perry. So it’s out. Document number two has both Katy and Perry, so that’s a possibility. Document eight has Perry but not Katy. 89 and 73 are out because they don’t have the right combination of words. 555 has both Katy and Perry. And then these two are also out. And so when someone comes to Google and they type in Chicken Little, Britney Spears, Matt Cutts, Katy Perry, whatever it is, we find the documents that we believe have those words, either on the page or maybe in back links, in anchor text pointing to that document.”
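    The inversion Cutts walks through can be sketched with a toy posting-list index. The document numbers are the ones from his Katy Perry example; the function names are my own.

    ```typescript
    // A toy version of the inversion Cutts describes: instead of storing
    // documents in word order, the index maps each word to the sorted list
    // of documents it appears in, and a query intersects those posting lists.
    const postings: Map<string, number[]> = new Map([
      ['katy', [1, 2, 89, 555, 789]],
      ['perry', [2, 8, 73, 555, 1000]],
    ]);

    // Document selection: keep only the documents that appear in every
    // query term's posting list.
    function selectDocuments(query: string[], index: Map<string, number[]>): number[] {
      const lists = query.map((term) => index.get(term.toLowerCase()) ?? []);
      if (lists.length === 0) return [];
      return lists.reduce((acc, list) => acc.filter((doc) => list.includes(doc)));
    }
    ```

    Here `selectDocuments(['Katy', 'Perry'], postings)` returns `[2, 555]` – the same two documents Cutts says survive selection, since they’re the only ones containing both words.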

    “Once you’ve done what’s called document selection, you try to figure out, how should you rank those?” he explains. “And that’s really tricky.We use page rank as well as over 200 other factors in our rankings to try to say, OK, maybe this document is really authoritative. It has a lot of reputation because it has a lot of page rank. But it only has the word Perry once. And it just happens to have the word Katy somewhere else on the page. Whereas here is a document that has the word Katy and Perry right next to each other, so there’s proximity. And it’s got a lot of reputation. It’s got a lot of links pointing to it.”
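    As a toy illustration of that ranking step, the sketch below combines a reputation score (standing in for PageRank) with a term-proximity bonus. The weights and formulas are invented for the sketch, not Google’s.

    ```typescript
    // Illustrative only: score a selected document by mixing its link
    // reputation with how close together the query terms appear.
    interface Doc {
      text: string;
      reputation: number; // 0..1, standing in for PageRank
    }

    // Proximity bonus: adjacent terms score 1; the bonus shrinks as the
    // terms spread apart, and is 0 if any term is missing from the text.
    function proximityBonus(text: string, terms: string[]): number {
      const words = text.toLowerCase().split(/\s+/);
      const positions = terms.map((t) => words.indexOf(t.toLowerCase()));
      if (positions.some((p) => p === -1)) return 0;
      const spread = Math.max(...positions) - Math.min(...positions);
      return 1 / Math.max(spread, 1);
    }

    // Invented 50/50 blend of reputation and proximity.
    function score(doc: Doc, terms: string[]): number {
      return 0.5 * doc.reputation + 0.5 * proximityBonus(doc.text, terms);
    }
    ```

    With this blend, a modest-reputation page with “Katy Perry” adjacent can outscore a high-reputation page where the two words merely appear somewhere, which is the trade-off Cutts is describing.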

    He doesn’t really talk about Search Plus Your World, which clearly has a great deal of influence on how users see content these days. And while he does talk about freshness, he doesn’t really address how that seems to drive rankings either. Freshness is great, as far as Google’s ability to quickly crawl goes, but sometimes it feels like freshness is getting a little too much weight in Google. Sometimes the more relevant content is older, and I’ve seen plenty of SERPs that lean toward freshness, making it particularly hard to find specific things I’m looking for. What do you think?

    “You want to find reputable documents that are also about what the user typed in,” continues Cutts in the video. “And that’s kind of the secret sauce, trying to figure out a way to combine those 200 different ranking signals in order to find the most relevant document. So at any given time, hundreds of millions of times a day, someone comes to Google. We try to find the closest data center to them.”

    “They type in something like Katy Perry,” he says . “We send that query out to hundreds of different machines all at once, which look through their little tiny fraction of the web that we’ve indexed. And we find, OK, these are the documents that we think best match. All those machines return their matches. And we say, OK, what’s the creme de la creme? What’s the needle in the haystack? What’s the best page that matches this query across our entire index? And then we take that page and we try to show it with a useful snippet. So you show the key words in the context of the document. And you get it all back in under half a second.”

    As Cutts notes in the intro to the video, he could talk for hours about all of this stuff. I’m sure you didn’t expect him to reveal Google’s 200 signals in the video, but it does provide some interesting commentary from the inside on how Google approaches ranking, even if it omits those signals.

    Google, as Cutts also recently explained, runs 20,000 search experiments a year.

  • Google AdWords For Video: Will This Help Those CPC Numbers?

    Video advertising just got a lot better for small businesses with Google’s launch of AdWords for Video, which adds YouTube advertising to the AdWords dashboard, enabling businesses to manage search, display and video campaigns from one place. With local being a major element for many of these businesses, advertisers will be happy to know that this includes geotargeting.

    “We have geotargeting capability in AdWords for video so yes, local businesses can focus their advertising on specific geographies,” a Google spokesperson assures us.

    Google compares YouTube advertising to other media:

    Google Compares Video Ads

    According to recently released data from comScore, video ad impressions reached record numbers in March. The firm says video ads accounted for 18.5% of all videos viewed and 1.5% of all minutes spent viewing video online.

    Advertisers only pay when users watch their ads (not when they skip them).

    “With TrueView video ads you only pay when viewers choose to watch your ad so you aren’t charged when viewers skip your ad if they aren’t interested or have already seen your video,” explains Baljeet Singh, YouTube group product manager. “This means your ad budget is focused on viewers interested in your video. By displaying a call-to-action overlay on your video you can talk about a sale or specific offer to your viewers, share more information about your business, or drive traffic to your website.”

    There are four TrueView formats: in-stream, in-search, in-slate and in-display.

    “On average, we’ve found that YouTube video ads drive a 20 percent increase in traffic to your website and a 5 percent increase in searches for your business (Google Campaign Insights, 2011),” says Singh. “With AdWords for video you can find out how viewers are engaging with your brand during and after they watch your ad. You can see how many viewers watched your entire video, visited your website, stayed on your channel to watch another video, or subscribed to your channel, after viewing your ad.”

    Google bought YouTube back in 2006. For years after that, the deal and Google’s monetization of the property were heavily criticized. While Google has certainly made YouTube more ad-saturated in recent years, this new offering could be the one that really pays off. Apparently, Google even thinks it can generate as much revenue as its search ads.

    Are these ads part of Google’s master plan to boost CPCs? For the last two quarters, Google has reported declines in CPCs, while Facebook – which is becoming more of a competitor to Google than ever – is apparently destroying Google on that front. Facebook has yet to unleash its mobile ads, though, which could have a similar effect on its revenue as Google’s mobile ads have had on Google’s own. Mobile is widely blamed for contributing greatly to Google’s CPC decreases.

    During an earnings call earlier this month, Google CEO Larry Page talked about how bullish he is on mobile, and that CPCs will improve. Mobile is exploding in query growth, he said, adding that the formats are just adapting a lot from a “relatively crude base.”

    “Right now, they don’t monetize well,” he said, comparing it to search in the early 2000s.

    People always spend most of their efforts on the major source of traffic, which is desktop, he said. But over time, he said, that will reverse.

    In the meantime, there are a whole lot of YouTube users, both desktop and mobile. There are over 800 million total. That’s a Facebook-like number on its own (well, almost). It will be quite interesting to see how the AdWords For Video element impacts CPCs.

    And how long until AdWords ads creep their way into Google+?

    Of course, Google+ is just the social spine of Google anyway, right? And it’s another as-yet-untapped batch of real estate.

  • A Different Take On Google Social Search (Coming to Yahoo And Bing Too)

    If you don’t like Google’s approach to social search with Search Plus Your World, you may or may not like Wajam and its newly redesigned Google search experience.

    Wajam has been available as a browser add-on for quite some time, far longer than Search Plus Your World, adding a personalized, social search experience across your favorite search engines, including Google. If you add it to your browser, you will find socially relevant results for many of your web searches. It does not eliminate the Search Plus Your World experience Google offers; it only adds to it (with private results from your Facebook, Twitter and Google+ networks). And unlike Search Plus Your World, the results are all together in one place, and can be minimized when you don’t want them.

    “We insert the Wajam dashboard above Google’s ‘Search Plus Your World’ results on the right,” explains Wong. “Our dashboard can be minimized or expanded, so it’s really an enhancement to what Google already shows. You get the best of both worlds.”

    On what you can get from Wajam that you can’t get through Google’s own social search experience, Wong says, “Wajam adds private results from Facebook and Twitter, in addition to Google+. You can filter based on social platforms, as well as on type of result (link, photo, video). You can also select which friends you want to see results from by clicking on their profile picture.”

    “Filtering/sorting capabilities are greater than Google’s at this point in time,” he adds.

    In a recent article, we asked just how great a signal social is for search relevancy. It depends on who you ask, but it’s clearly more useful for some types of searches than others.

    “Social can be used for all kinds of web searches, but is most relevant when applied to recommendations,” says Wong. “For example, I enjoy watching the best swing dance videos on YouTube, and since I have a lot of dancer friends who share videos on Facebook, I use Wajam to discover and keep track of new and interesting swing videos that are uploaded.”

    “Social results are useful when you want to find out what your friends think about a certain product, and it also helps you find out which people in your network use a product,” he adds. “By searching for an iPad 2, I can see which friends in my network own an iPad because they talk about it.”

    “The quality of your network makes a big difference in the type of social results you get,” he continues. “If you do not have friends who have knowledge and share links on a topic, you won’t get good results. Which is why the new design can be ‘minimized’ and you can quickly glance to see if there are a lot of results, before opening up the Wajam dashboard.”

    In the end, the strength of social search is in finding recommendations and opinions from your network, in order to get feedback from people you trust, Wong says. “And as we move forward, we’re going to continue exploring how we can better bubble up the most relevant results by analyzing the profile of your friends and what their interests are in order to rank the results.”

    Wong tells us that Wajam is also bringing the new design to Bing and Yahoo this week, and more sites soon.

  • Wajam Makes Personalized Results Less Obtrusive On Google

    Some people like personalized search results based on their social connections. Some don’t. Google thrust Search Plus Your World upon users earlier this year, to mixed reviews. While it has an on/off toggle, it has littered search results with more content based on what people you may be connected to via various social networks (like Google+ – not Facebook/Twitter) have shared.

    Last week, I wrote a post asking whether social is really a good signal for relevancy. My conclusion was that while it can help in some kinds of searches, it is not necessarily helpful for all kinds. While knowing what hotel in Chicago my friend recommends might be useful to me, knowing which article about Mitt Romney he gave a thumbs up to isn’t necessarily what I’m looking for.

    Google sprinkles in these results, and social signals influence rankings. I’ve written about Wajam’s efforts in personalized/social search more favorably in the past. One thing that is very different about Wajam’s approach is that it gives users a designated spot for these kinds of results. You can glance and see if there’s anything interesting there (and it includes Facebook/Twitter connections), but it doesn’t insert them throughout the organic search results wherever it thinks they are most relevant.

    Today, Wajam announced a new version of its social experience for Google.

    “Learning from hundreds of thousands of users who made hundreds of millions of searches, we redesigned the whole user experience and packaged it in a convenient, unobtrusive new design that removes clutter from your search results page,” says Wajam’s Alain Wong.

    “The new design clearly breaks down number of results, organizing them by links, photos or videos, and lines up specific friends who have commented on the search term you searched,” adds Wong. “This gives you the ability to more easily filter results by specific friends, relevance or time.”

    The new design is rolling out to users today.

  • Facebook Ads Reportedly Destroying Google Ads In CPCs

    Facebook and Google are competitors. Make no mistake. It’s easy to view one as a social network and the other as a search engine, but it’s much more complicated than that. Both companies make money from advertising, and that is one major area where the competition will heat up more and more. Ultimately, it’s the main area that matters from a competitive standpoint, outside of perhaps web identity and engineering talent. They’ve competed plenty in both of those areas too.

    Last week, when Google released its earnings report for Q1, the company revealed a 12% year-over-year decline in CPCs. SFGate.com ran an article from Business Insider talking about how things are moving in the opposite direction for Facebook. Author Matt Rosoff talked to a rep from marketing firm TBG Digital, who said that Facebook CPCs rose 28% over the same period.

    It’s going to be really interesting watching this battle once both companies are public. Facebook’s IPO is coming on May 17, based on recent reports. They’re saying the company could be valued at $104 billion.

    Of course, Facebook is doing a lot of other things marketers will be attracted to. The recent release of the brand timelines is an obvious one (though not every brand is crazy about them). On that note, you might want to take a look at this infographic about what brand pages are posting and how it’s being shared:

    Pandemic Labs Facebook Brand Ads Infographic

    As far as regular Facebook ads, there are certain things you’ll want to consider when running them, beyond just CPC. We recently looked at a study from TBG showing CTR based on days of the week. Check out Saturdays.

    Clickthrough rate, Facebook ads

    Facebook’s sponsored story ads may be unavoidable for brands relying on their own Facebook updates to drive sales or even engagement. Facebook has not made visibility in the news feed any easier.

    Last week, however, Facebook did make its new Offers product available for local businesses. The service lets businesses put special offers in their fans’ feeds. I wonder how those will stack up compared to video and photo content in terms of likes.

    Facebook is expected to launch mobile ads in the near future, which could be a major new revenue source for the company. Then again, mobile is exactly what many believe has been a key factor in Google’s CPC decrease, so it will be very interesting to see how it impacts Facebook.

    In terms of the two companies competing for ad dollars, it seems fairly obvious that if Facebook were to launch a proper search engine, which could happen, it could be a major competitor to Google’s bread-and-butter AdWords business. Given how much integration Facebook already has with the rest of the web, it seems just as likely that Facebook would launch its own AdSense-like ad network, putting Facebook ads on third-party sites as well. Both the search engine and such a network are only speculative possibilities at this point, but if you ask me, it makes too much sense for them not to happen.

    What do you think? Would you like to see a Facebook search engine? How about a Facebook AdSense?

  • Auxiliary Search? New Google Search Feature Spotted In The Wild

    Is Google about to make it easier for users to find relevant information without ever leaving the search page?

    Reddit user philosyche just spotted what looks like an experimental Google search feature in the wild. Assuming it’s legit, the feature would continue Google’s push to provide more direct answers in search results, something that has far-reaching implications for websites around the world.

    The new feature throws relevant information about a query into the white space on the right side of the page. In this screenshot, you can see that a search for “The Beatles” shows a quick summary, along with information about some of their top songs and albums, as well as related searches.

    As you can see, there are plenty of links in this “auxiliary” (my word, not Google’s, obviously) search. The basic biographical info on the band comes from Wikipedia. There are links to specific songs and albums – possibly to Google Play? The related search links would obviously open up new Google searches.

    The reddit user who posted this screencap to the Google subreddit now says that this is gone from their search results. Could it be a rolling update? A search experiment?

    If so, will it only show up for a certain type of query like bands or films?

    As we told you last week, Google runs upwards of 20,000 different search experiments every year – only a fraction of which ever make it to real Google users. Matt Cutts recently said that in 2009, only 585 of these changes ever made it to prime time.

    A feature like this doesn’t seem strange at all, considering Google’s rumored strategy of providing users with more direct answers within search results. You know what I’m talking about – when you search something like “Easter 2012” or “The Dark Knight Rises release date,” here’s what you’ll see at the top:

    Of course, this is all considering that the screencap is real. Personally, I don’t see anything like this when I search for The Beatles. Does anybody else see this new feature? Let us know in the comments.

  • New Google Near Match Types For AdWords To Be Available In May

    Last week, following a report from GroupM Search CEO Chris Copeland, published by AdAge, we talked about some improvements in match types coming to Google: “near exact” and “near phrase”. The basic gist is that these new types will allow advertisers to take advantage of misspellings (on the searcher’s part) and plurals.

    At the time, a Google spokesperson told WebProNews, “We actually haven’t announced anything on this and don’t have any more info to share at this time – we frequently beta test new features with agency partners.”

    Now, the features have been officially announced. Google gives the following examples of how things will work:

    Keyword:       waterproof sunblock    buy bollard cover     single serving coffee maker
    Search query:  waterpoof sunblock     buy bollard covers    single serve coffee maker

    Right now, only queries matching the first row exactly would be considered matches and trigger the appearance of an ad. Once the features launch, the second-row variants would match as well. This should open the door for a lot more impressions.
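    To make the idea concrete, here is a rough Python sketch of “near” matching using the standard library’s difflib. This is purely illustrative (Google hasn’t published how its matching actually works); it just shows how the second-row query variants above sit within a small edit distance of the first-row keywords:

    ```python
    # Illustrative only: not Google's actual matching algorithm.
    import difflib

    # Keywords an advertiser might be bidding on (from Google's examples above)
    KEYWORDS = [
        "waterproof sunblock",
        "buy bollard cover",
        "single serving coffee maker",
    ]

    def near_exact_match(query, keywords, cutoff=0.85):
        """Return the keyword closest to the query, or None if nothing is close."""
        matches = difflib.get_close_matches(query, keywords, n=1, cutoff=cutoff)
        return matches[0] if matches else None

    # Misspellings and plural/singular variants still land on the right keyword
    for query in ("waterpoof sunblock",
                  "buy bollard covers",
                  "single serve coffee maker"):
        print(query, "->", near_exact_match(query, KEYWORDS))
    ```

    Real ad systems presumably use far more sophisticated spelling and stemming models; the point is just that a small textual distance separates the “near” variants from the exact keywords.
    
    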

    “People aren’t perfect spellers or typists. At least 7% of search queries contain a misspelling, and the longer the query, the higher the rate,” says AdWords Product Manager Jen Huang. “Even with perfect spelling, two people searching for the same thing often use slightly different variations, such as ‘kid scooters’ and ‘kid’s scooter’ or ‘bamboo floor’ and ‘bamboo flooring.’”

    Huang shares a quote from one of the testers on the Inside AdWords blog:

    “Previously we spent a lot of time making sure to include hundreds of versions of brand misspellings and to include plural forms of all our keywords,” said Dana Freund, Senior SEM Manager at GameDuell. “With the improvements to exact and phrase match we don’t have to worry about these keywords anymore. We get more relevant impressions for a smaller number of keywords, and it’s been a significant time saver for us.”

    Google says the changes will actually go live in mid-May, but for the time being (over the coming weeks), it will be rolling out controls for advertisers to get prepared. These, Google says, will allow advertisers to adjust keyword matching options, and will be found under “Advanced settings” in Campaign Settings. From that section, you’d go to Keyword Matching Options.

    More on the features soon.

  • Major Google Update Suspected, Yet Again, By A Bunch Of Webmasters

    Once again, webmasters are complaining about what may have been a major update from Google. They’ve taken to the Google Webmaster Help forums to express their grievances, although to be fair, it’s not all bad for everybody. When sites drop, others rise. That’s how it works.

    Barry Schwartz, at Search Engine Roundtable, who wonders if it could be the “overly SEO penalty” Matt Cutts discussed at SXSW last month, points to 11 separate forum threads with complaints. There’s definitely something going on.

    Of course, in these situations, the Panda update is always mentioned. We’ve reached out to Google for more info. Sometimes they respond. Sometimes they don’t. It will most likely be one of the generic “we make changes every day” kind of responses, and we’ll probably have to wait until the end of April or the beginning of May to get the real list of changes Google has made.

    The last time there was a known Panda update, Google went so far as to tweet about it. They know people want to know when this happens. That doesn’t necessarily mean they’ll tweet every time, but I wouldn’t be surprised. This time, no tweet from Google so far.

    For a refresher on the “overly SEO penalty” Schwartz speaks of, read the following:

    Google Is Working On Making SEO Matter Less
    Google Webmaster Central Creator Talks Google’s “New” Google Changes
    New Google Changes: Really A Matter Of Mom And Pop?
    SEO DOs And DONT’S According To Google: Mixed Signals?

    Other things have been costing sites lately. For one, Google’s de-indexing of paid blog/link networks caused a lot of webmasters to get messages from Google about questionable links. This week, Google sent out messages to 20,000 sites informing them that they appeared to be hacked.

    If your rankings have fallen, one thing you may want to consider is taking authorship more seriously (and that includes Google+ engagement), though even that appears to be having some issues on the tracking side.

    Last week, we spoke with Dani Horowitz whose site, DaniWeb, has been hit by Google, yet again, after recovering from multiple iterations of the Panda update.

    Not only does Google make changes every day, it runs even more experiments, with subsets of users. Matt Cutts recently talked about how Google runs 20,000 search experiments a year.

    Even more recently, Cutts talked about what will get you demoted or removed from Google’s index.

  • Google Rich Snippet Updates Announced, Author Stats Go Missing

    Update: A Google spokesperson tells WebProNews, “We’ve currently disabled the experimental ‘Author stats’ feature in Webmaster Tools Labs as we work to fix a bug in the way stats are attributed.”

    Google announced that Product Rich Snippets are now supported on a global scale, so businesses around the world can take advantage of them and stand out more in search results for the products they’re selling – the very products searchers are looking for. Product Rich Snippets had only been available in certain locations until now.

    “Users viewing your site’s results in Google search can now preview information about products available on your website, regardless of where they’re searching from,” said product manager Anthony Chavez on the Google Webmaster Central blog.

    Chavez also announced that Google’s Rich Snippets Testing Tool has been updated to support HTML input. “We heard from many users that they wanted to be able to test their HTML source without having to publish it to a web page,” he says.

    Rich Snippet Testing Tool
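    If you want to try the new HTML input, markup along these lines is the sort of thing you can paste in. This is a minimal schema.org Product example with made-up values, not markup from Google’s announcement:

    ```html
    <!-- Minimal schema.org Product markup (illustrative values only) -->
    <div itemscope itemtype="http://schema.org/Product">
      <span itemprop="name">ACME Anvil</span>
      <div itemprop="aggregateRating" itemscope
           itemtype="http://schema.org/AggregateRating">
        Rated <span itemprop="ratingValue">4.5</span>/5
        based on <span itemprop="reviewCount">27</span> reviews
      </div>
      <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
        <span itemprop="price">$99.00</span>
      </div>
    </div>
    ```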

    There’s some interesting discussion in the comments section of Google’s blog post announcing these changes. Some are clearly happy to see the HTML support for the tool.

    Coincidentally, this is the second time I’ve written about the Rich Snippets Tool in the last 24 hours. I wrote a big piece on Google’s Authorship Markup and what it means for both authors and Google. In that, I referenced a recent interview Google’s Sagar Kamdar did with Eric Enge at Stone Temple Consulting, as Kamdar had suggested using the Rich Snippets Testing Tool to make sure you have authorship set up correctly.

    As mentioned in that other piece, Google has been providing author clicks and impressions data in Webmaster Tools. Now some are finding that author stats have gone missing. “Thanks for the upgrade 😉 But now the author stats are disappeared,” one user commented on Google’s blog post.

    Some are complaining about it in the WebmasterWorld forums. Sally Sitts, who started a thread, writes:

    I went to check my “Author Stats”, under the “Labs” tab in Google Webmaster Tools. GONE!

    Anyone else?

    In the past, they only gave me credit for about 50% of the pages that I have “fixed up with Google-required special authorship tags”, according to their specifications.

    At the bottom of the “Labs” page, their disclaimer prevails –

    “Webmaster Tools Labs is a testing ground for experimental features that aren’t quite ready for primetime (sic). They may change, break or DISAPPEAR AT ANY TIME.”

    Nothing about the probably of return, however. (sic)

    This was followed by a couple of interesting replies. Lucy24 writes:

    They never got as far as crediting me with anything, although the Rich Snippets Testing Tool (under More Resources) still comes through with “verified author markup”.

    :: mopping brow ::

    The author function definitely still exists. Saw it within the last 24 hours while doing a search. (Not, alas, a search for my own pages.)

    Sgt_Kickaxe writes:

    Lots of changes going on still.

    Did you know that after you verify your authorship with G+ you can UNverify it by removing the markup (you can even close your Google+ profile!) but Google will still give you special search results (including picture)? They forget nothing. That’s something to think about if you’re running a plugin of any sort to handle markup, save your resources and shut it down 🙂

    With regard to the Product Rich Snippets, one reader commented, “WHY are adult-related products not supported for rich snippets? What is the problem, since there is no picture displayed? Are loveshops, selling perfectly legal items, not worthy of having nice SERPs displayed too? I find that really unjust.”

    We’ve reached out to Google for comment regarding the missing author stats. We’ll update when we know more.

  • Bing Translator App For Windows Phone Gets Big Upgrade

    Microsoft announced a new Translator App for Windows Phone, powered by Bing.

    “The world is a melting pot of culture and languages,” a spokesperson for Microsoft tells WebProNews. “And now with your trusted Windows Phone, you can face new countries or local restaurants with the Translator App.”

    “The app allows you to translate typed text, voice translation, and even scan with your camera,” he says. “Just point at menus, newspapers or any printed text and the app will scan and overlay the translation.”

    “If you’re traveling in a different country and don’t want to pay roaming fees, just download the language pack you need and you can use the app offline,” he adds.

    Between scan, type and speak mode, you can choose your preference and pin it to your home screen that way.

    “It is one of the shining ways in which we were able to take full advantage of Windows Phone’s innovative design that puts people at the center of the experience,” says Director of Product Management Vikram Dendi on the Bing blog. “You can also pin the app itself to the home screen – to unlock a useful ‘live tile’, helping deliver a translation a day in the language of your choice. Thanks to this feature, you can learn a new language one word at a time.”

    The app is available in the Windows Phone Marketplace for free. If you have an older version of the app, you’ll be notified of an update soon.

  • Google On What Will Get You Demoted Or Removed From Index

    Google’s Matt Cutts, as you may or may not know, often appears in Webmaster Help videos addressing questions about what Google does (and what it doesn’t do) in certain situations. Usually, the questions are submitted by users, though sometimes Cutts will deem an issue important enough to ask the question himself.

    In the latest video, which Cutts tweeted out on Monday, a user asks:

    “Just to confirm: does Google take manual action on webspam? Does manual action result in a removal or can it also be a demotion? Are there other situations where Google removes content from its search results?”

    Who better to address this question than Google’s head of webspam himself, Matt Cutts?

    Cutts responds, “I’m really glad to have a chance to clarify this, because some people might not know this, although we’ve written this quite a bit in various places online. Google is willing to take manual action to remove spam. So if you write an algorithm to detect spam, and then someone searches for their own name, and they find off-topic porn, they’re really unhappy about that. And they’ll write into Google and let us know that they’re unhappy.”

    “And if we write back and say, ‘Well, we hope in six to nine months to be able to have an algorithm that catches this off-topic porn,’ that’s not a really satisfactory answer for the guy who has off-topic porn showing up for his name,” he says. “So in some situations, we are willing to take manual action on our results. It’s when there are violations of our web spam quality guidelines.”

    You can find those here, by the way.

    “So, the answer to your question is, yes, we are willing to take manual action when we see violations of our quality guidelines,” he says. “Another follow-up question was whether it has to be removal or whether it can be a demotion. It can be a demotion. It tends to be removal, because the spam we see tends to be very clear-cut. But there are some cases where you might see cookie cutter content that’s maybe not truly, truly awful, but is duplicative, or you can find in tons of other places. And so it’s content that is really not a lot of value add – those sorts of things.”

    “And we say in our guidelines to avoid duplicate content, whether it’s a cross-domain, so having lots of different domains with very, very similar or even identical content,” he says. “So when we see truly malicious, really bad stuff, we’re often taking action to remove it. If we see things that are still a violation of our quality guidelines, but not quite as bad, then you might see a demotion.”

    A bad enough demotion might as well be a removal anyway. I’m sure a lot of Panda victims out there have a thing or two to say about that.

    “And then the last question was, ‘Are there other situations where Google will remove content from its search results?’” continues Cutts. “So, we do reserve the right to remove content for spam. Content can be removed for legal reasons, like we might get a DMCA complaint or some valid court order that says we have to remove something within this particular country.”

    “We’re also willing to remove stuff for security reasons, so malware, Trojan horses, viruses, worms, those sorts of things,” he says. “Another example of security might be if you have your own credit card number on the web. So those are some of the areas that we are willing to take action, and we are willing to remove stuff from our search results. We don’t claim that that’s a comprehensive list. We think that it’s important to be able to exercise judgment. So if there is some safety issue, or of course, things like child porn, which would fall under legal. But those are the major areas that we’ve seen, would be spam, legal reasons, and security. And certainly, the vast majority of action that we take falls under those three broad areas.”

    “But just to be clear, we do reserve the right to take action, whether it could be demotion or removal,” he reiterates. “And we think we have to apply our best judgment. We want to return the best results that we can for users. And the action that we take is in service of that, trying to make sure that we get the best search results we can out to people when they’re doing searches.”

    Speaking of those security concerns, Cutts also tweeted on Monday that Google has sent messages to 20,000 sites, indicating that they may have been hacked. He attributes this to some “weird redirecting.”

  • Google To 20,000 Sites: You May Have Been Hacked

    Google has been sending out a lot of messages to webmasters lately. A lot have been getting them based on questionable links pointing to their sites, in relation to Google’s cracking down on paid blog/link networks.

    Now, over 20,000 sites have received messages from Google for a very different reason: hacking (or the possibility of hacking). Matt Cutts tweeted the following today:

    Is your site doing weird redirects? We just sent a “your site might be hacked” msg to 20K sites, e.g. http://t.co/r9jOkiOm

    Barry Schwartz at Search Engine Land claims to have seen some related activity. “I’ve personally seen a spike in the number of sites redirecting from their web site to a non-authorized site recently,” he writes. “The webmaster is typically unaware of this redirect because the redirects only occur when someone clicks from Google’s search results to the web site. Typically the site owner doesn’t go to Google to find his web site, the site owner goes directly to the site.”

    It’s unclear if Google’s messages are related, but TheNextWeb recently reported on hacking of some sites, in which the hacker was sneaking in and inserting backlinks to his/her own spammy content, and even messing with canonical link elements, tricking Google’s algorithm into thinking the hacker was the originator of the content even though he/she was simply scraping it. The hacker was even able to hijack +1’s in search results.
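    For context, the canonical link element the hackers were tampering with is a single tag in a page’s head that tells search engines which URL is the “original” copy of the content. The URLs below are hypothetical examples:

    ```html
    <!-- What the legitimate page head should declare -->
    <link rel="canonical" href="http://example.com/original-article" />

    <!-- After a hack: the same element now credits the scraper's copy,
         so a search engine may treat the attacker's page as the original -->
    <link rel="canonical" href="http://spammy-scraper.example/stolen-copy" />
    ```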

    Google has a help center article in Webmaster Tools about what to do if your site has been hacked. That includes taking your site offline and cleaning it of malicious software, and requesting a malware review from Google.

    “You can find out if your site has been identified as a site that may host or distribute malicious software (one type of ‘badware’) by checking the Webmaster Tools home page (Note: you need to verify site ownership to see this information.),” says Google.

    Google sends out notices to affected sites at the following email addresses: abuse@, admin@, administrator@, contact@, info@, postmaster@, support@ and webmaster@.

    Google bases its identifications of “badware” on guidelines from StopBadware.org, the company says, though it also uses its own criteria and tools to identify sites that host or distribute badware.

    “In some cases, third parties can add malicious code to legitimate sites, which would cause us to show the warning message,” Google says in the help center. “If you feel your site has been mistakenly identified, or if you make changes to your site so that it no longer hosts or distributes malicious software and you secure your site so that it is no longer vulnerable to the insertion of badware, you can request that your site be reviewed.”

    Google has instructions for cleaning your site here. This involves quarantining the site, assessing the damage, cleaning it up and asking for Google to review it.

  • Google Authorship Can Help “Level The Playing Field” In Search Visibility

    Last summer, Google announced that it would begin supporting authorship markup, or rel=”author”. It’s still in pilot mode, but Google has been making use of it in search results ever since, in increasing numbers, as more web content authors use it.

    No matter how many places you produce content on the web, the idea is that you tie them all back to your Google profile, so Google understands that it’s all coming from you. Among the benefits to authors is an extra visual link in Google search results – an author photo pointing to that Google profile when your content appears in the results. It can lend credibility and increased exposure to your personal brand. It even shows your Google+ circle count. Author info can appear both in Google web search and Google News:

    Google Authorship

    It can help webmasters see how well certain authors are performing as well. In December, Google added author clicks and impressions to Webmaster Tools, so webmasters can see how often author content is showing up in Google search results.

    “If you associate your content with your Google Profile either via e-mail verification or a simple link, you can visit Webmaster Tools to see how many impressions and clicks your content got on the Google search results page,” explained Google at the time.

    Authorship Analytics

    Update: This feature appears to have suddenly gone missing. At this time, we’re unable to determine whether this is temporary or not. We’ve reached out to Google for more info, and will update accordingly.

    Setting Up Authorship

    There are actually three different ways to implement authorship markup on your content: the original three-link method (the author’s Google profile, author page and article page all link to one another), the two-link method (Google profile and content link to each other) and the email method (for when you have an email address on the site you’re writing for). Sagar Kamdar, Google’s authorship mastermind, talked about each of these in an interesting interview with Eric Enge at Stone Temple Consulting. There’s an email verification tool you can use, by the way.

    email verification

    According to Kamdar, the email method might actually get you setup more quickly. “Sometimes authors don’t have the ability to add additional links from the bio portion of their article or they need to request their webmaster to make some tweaks to enable that,” he is quoted as saying. “The email method doesn’t require any modification to the website to get setup, so it is possible that you could get setup a little bit faster for that than the 2 link method. In addition, with email verification, it is far more dependent upon our heuristics and analysis to figure out if content is associated to your Google profile and that’s a science that we are constantly tuning.”

    You can go to your Google Profile, go to “Edit Profile,” scroll down and click on “work,” click the drop down arrow next to “phone,” click on “email,” and put in your address where it says new contact info. Change the visibility of the section from “only you” to “everyone on the web,” click “save,” and click “done editing.”
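    For the two-link method, the markup itself is small. Something like this on the article page (the profile ID below is a placeholder, not a real one), plus a “Contributor to” link back from your Google profile, closes the loop:

    ```html
    <!-- On the article page: byline link to the author's Google profile
         (the profile ID here is a placeholder) -->
    <a rel="author" href="https://plus.google.com/112345678901234567890">Author Name</a>

    <!-- The second link is not markup on your site: on the Google profile
         itself, add the site under "Contributor to" so it links back -->
    ```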

    Here are a couple videos of Google talking about getting authorship set up:

    Authorship As A Ranking Signal

    In that first one, Google’s Matt Cutts asks, “Will people get higher rankings? Is there a rankings boost for rel=’author’?”

    Google’s Othar Hansson then replies, “It’s obviously early days, so we hope to use this information and any information as a ranking signal at Google. In this case, we want to get information on credibility of authors from all kinds of sources, and eventually use it in ranking. We’re only experimenting with that now. Who knows where it will go?”

    The video was released in August. Obviously a great deal of time has passed since then. We can’t say with 100% certainty that it’s already a ranking factor, but I wouldn’t be surprised. I certainly see a lot of authorship-enabled results in my daily search activity.

    Kamdar actually addresses it in his interview with Enge. Enge brought up the idea that “this will feed into social signals and author authority in the long term.”

    Kamdar responded, “Yes, you could eventually see that type of thing happening.”

    Eventually.

    Google’s most recent monthly list of algorithm changes included a couple of items relevant to this discussion. One was “better indexing of profile pages.”

    “This change improves the comprehensiveness of public profile pages in our index from more than two-hundred social sites.”

    This (if it was really this particular change) seemed to actually give Google profiles less weight in search results. Certain queries that at one point ranked Google profiles higher were showing more relevant profiles ahead of their Google counterparts (like Mark Zuckerberg’s Facebook profile over his Google+ profile).

    Another change in March was listed as “UI refresh for News Universal.” Google described this: “We’ve refreshed the design of News Universal results by providing more results from the top cluster, unifying the UI treatment of clusters of different sizes, adding a larger font for the top article, adding larger images (from licensed sources), and adding author information.”

    Author info was already appearing in Google News, but now, through Google’s Universal results, here is another opportunity for your authorship-enabled Google profile to show up.

    The Doorway to Google+

    There are obvious benefits to authors from enabling authorship for Google. There are, of course, benefits to Google as well. The main one would be increased emphasis on Google+. As Google CEO Larry Page explained during an earnings call last week, there are two parts of Google+: the “social destination” (what most people think of as Google+) and the “social spine,” which is the social layer over the rest of Google’s products – including search.

    Google has already implemented Search Plus Your World this year, which includes increased integration of Google+ into search results. It uses the social connections Google+ users have made with others to personalize search results, based in part on those connections.

    Authorship further integrates Google+ into search results (granted, this was going on ahead of SPYW’s launch). Every time it shows a user a Google profile because of authorship, it is providing another doorway to Google+, the social destination.

    If you go to my Google profile, for example, you’ll see my recent Google+ posts, public +1’s, etc. The Google Profile, which has been around much longer than Google+, still serves as the central part of a Google+ user’s account. This is another reason Google+ should simply be thought of as Google at large.

    Your Google+ As Your Online Identity

    It’s about online identity more than anything else. Kamdar acknowledges this in that interview as well.

    “The main thing that we are trying to address is the faceless nature of the web,” he is quoted as saying. That alone should be a clear indicator of just how much of a competitor Google is to Facebook.

    It’s also for that reason that Google is really picky about how authors represent themselves online. At first, Google didn’t even allow pseudonyms on Google+.

    “It was largely an issue of development priorities,” Google’s Vic Gundotra explained at last year’s Web 2.0 summit. “It’s complicated to get this right. It’s complicated on multiple dimensions. One of the complications it’s complicated on is atmosphere. If you’re a woman and you post a photo and Captain Crunch or Dog Fart comments on it, it changes the atmosphere of the product.”

    After a while, Google began allowing for pseudonyms.

    But that’s not the only area where Google has been strict about author representation. Google has actually told people to change their profile pictures when it didn’t feel the pictures were a good representation. We talked about this last year, when my colleague Josh Wolford was asked to change his Google profile picture. Wolford was using an image of himself made up as a zombie from a Halloween party. This photo:

    Zombie Josh Wolford

    As a matter of fact, it was Kamdar himself who emailed Wolford to say, “We noticed you’ve set things up correctly on your end. However, while we’re in this limited testing, we’re trying to make sure that we’ve got the best author pictures we can get–is there any way you could have a non-zombie picture for your profile?”

    Kamdar also briefly addressed this issue in his interview with Enge. “The basic criteria is that you are setup correctly, you provide a high quality photo of yourself, and then based on our algorithm when your content shows up, we just try to make sure the photo would be relevant to the user. In terms of timeline, it just depends on the frequency of how often we crawl and index your content which is variable, based on sites. We just follow the natural progression of our crawling and indexing technology and it could be setup in days or it could take weeks.”

    Other Authorship-Related Things To Consider

    There were a few more noteworthy takeaways from Kamdar’s conversation with Enge.

    One is that he (and presumably Google) sees authorship as a way for users to identify authors they already like when those authors write about something the user is searching for. To me, this only adds to the “filter bubble”. Readers could be missing out on content from other great authors just because they’re going to the ones they’re familiar with.

    Another is that you should use the Rich Snippets testing tool, which Kamdar suggests using to check whether you have authorship implemented correctly.

    Finally, it’s OK to link on your Google profile to sites you contribute to without having authorship set up on those sites. It won’t hurt you in any way, other than keeping your content from those sites from appearing with your Google profile in search results.
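    At the markup level, authorship is typically declared with a rel=”author” link in a page’s HTML pointing at the writer’s Google profile. The Rich Snippets testing tool is the authoritative check, but as a rough offline sanity test, a short Python sketch can confirm the link is present (the profile URL below is a made-up placeholder):

```python
from html.parser import HTMLParser

class AuthorLinkFinder(HTMLParser):
    """Collects href targets of <a> or <link> tags carrying rel="author"."""
    def __init__(self):
        super().__init__()
        self.author_links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if tag in ("a", "link") and "author" in rels:
            self.author_links.append(attrs.get("href"))

page = """
<html><head>
  <link rel="author" href="https://plus.google.com/1234567890/posts">
</head><body>Article text goes here.</body></html>
"""

finder = AuthorLinkFinder()
finder.feed(page)
print(finder.author_links)  # -> ['https://plus.google.com/1234567890/posts']
```

    If the list comes back empty, the page isn’t declaring any author link at all, and the testing tool will fail it too.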

    The most important takeaway from all of this, however, is that if you are concerned about your visibility in search results, and you’re creating content on the web, you should be implementing this. From the sound of it, Google is only going to use the info more in ranking going forward. Of course, it also suggests that you’d be wise to use Google+ more as a social tool. Remember, with authorship, Google is showing circle counts, and you’re not going to be in many circles without some level of engagement. Of course, even without the search visibility aspect, engaging in the community is likely to help you on its own.

    The good thing, for many content creators, is that you don’t have to write for a major publication to use it. These days, thanks to blogs, social media and other user-generated content sites, anyone can be a content creator, and the more weight Google gives to authorship, the more authors on all levels will be able to compete for visibility.

  • DaniWeb Hit By Google Again, Following Multiple Panda Recoveries

    IT discussion community site DaniWeb has had a rather hectic year or so. Hit by Google’s Panda update last year, the site has seen a series of ups and downs – hard hits from Google’s algorithm and tremendous recoveries. The site has been hit yet again, and Founder/CEO Dani Horowitz is telling us about what’s going on this time. She’s not sure if it’s the Panda update, though the whole thing just happens to coincide with a recent iteration of it.

    Have you seen traffic increase or decrease since the latest known Panda update? Let us know in the comments.

    DaniWeb is one of those sites that, in the heart of the mad Panda scramble of 2011, seemed to be unjustly hit. It’s a forum with a solid user base, where people can discuss issues related to hardware, software, software development, web development, Internet marketing, etc. It’s the kind of site that often provides just the right kind of answer for a troubled searcher.

    We interviewed Horowitz last year, and she told us about some of the things she was doing to help the site recover from the Panda trauma. Here’s the interview, or you can click the link for more about that.

    That was in May. In July, Horowitz claimed DaniWeb had made a 110% recovery from Google. In September, Panda appeared to have slapped the site again, causing it to lose over half of its traffic. Shortly thereafter, in early October, Horowitz announced that the site had managed to recover yet again. “Clearly Google admitted they screwed up with us,” she said at the time.

    Now, six months later, DaniWeb has been hit yet again, but this time, Horowitz is taking at least part of the blame.

    PLEASE PLEASE PLEASE RETWEET … I NEED HELP 🙁 http://t.co/asnxaqAB

    The tweet links to this Google Groups forum discussion, where Horowitz describes her new issues in great depth, also noting that the site had eventually made a 130% recovery from its pre-Panda numbers. DaniWeb rolled out a new platform, coincidentally at the same time a Panda update was made in March, and she says the site’s been going downhill ever since.

    Horowitz tells WebProNews she’s been “hibernating in a cave the past few months coding the new version of the site.”

    “I do not believe that we were hit by Panda,” she says in the forum post. “Unlike Panda, which was an instantaneous 50-60% drop in traffic literally overnight, we’ve instead had a steady decrease in traffic every day ever since our launch. At this point, we’re down about 45%. We are using 301 redirects, but our site’s URL structure *DID* change. While we’re on an entirely new platform, the actual content is entirely the same, and there is a 1-to-1 relationship between each page in the old system and the new system (all being 301-redirected).”

    Later in the post, she says, “This mess is partially my fault, I will have to admit. As mentioned, we changed our URL structure, and I am 301 redirecting the old URLs to the new URLs. However, we also changed our URL structure last February, right after Panda originally hit. I have to admit that when we first went live, I completely forgot about that. While I was 301 redirecting the old version to the new, I was *NOT* redirecting the old old version to the new for about 72 hours, until I remembered! However, by that time, it was too late, and we ended up with over 500,000 404 errors in Google Webmaster Tools. That has been fixed for quite a few weeks already though.”
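    The scenario she describes — two generations of URL changes, with the oldest redirect layer briefly forgotten — is easy to model. This Python sketch (all paths are hypothetical; DaniWeb’s real URL schemes differ) shows why the old-old URLs 404 when only the newer redirect table exists, and why a two-hop 301 chain results when both do:

```python
# Hypothetical URL generations: the pre-2011 scheme, the 2011 scheme,
# and the 2012 relaunch scheme.
redirects_2011 = {"/forums/thread420572.html": "/web-development/php/420572"}
redirects_2012 = {"/web-development/php/420572": "/web-development/php/threads/420572"}

def resolve(url, *tables, max_hops=5):
    """Follow 301 mappings across redirect tables; return (final_url, hops)."""
    hops = 0
    moved = True
    while moved and hops < max_hops:
        moved = False
        for table in tables:
            if url in table:
                url = table[url]
                hops += 1
                moved = True
    return url, hops

# With only the newest table in place, the oldest URL goes nowhere (a 404):
print(resolve("/forums/thread420572.html", redirects_2012))
# -> ('/forums/thread420572.html', 0)

# With both tables, it resolves, but through a two-hop chain. Ideally the
# old-old URLs would map straight to the newest scheme in a single 301.
print(resolve("/forums/thread420572.html", redirects_2011, redirects_2012))
# -> ('/web-development/php/threads/420572', 2)
```

    Collapsing chains so every legacy URL 301s directly to its final destination is the usual recommendation after a second migration.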

    In between those two quotes, she details the observations about Google’s behavior with her site that she’s not happy with. The first one:

    If you visit a page such as: http://www.daniweb.com/web-development/php/17 you will see that the article titles have URLs in the format http://www.daniweb.com/web-development/php/threads/420572/php-apotrophe-issue … However, you can also click on the timestamp of the last post to jump to the last post in the article (a url such as http://www.daniweb.com/posts/jump/1794174)

    The /posts/jump/ URLs will 301 redirect you to the full article pages. For example, in this specific example, to http://www.daniweb.com/web-development/php/threads/420572/php-apotrophe-issue/1#post1794174 (the first page of the thread, with an anchor to the specific post).

    The page specifies rel=”canonical” pointing to http://www.daniweb.com/web-development/php/threads/420572/php-apotrophe-issue

    Why then, does the /posts/jump/ URL show up in the Google search results instead of my preferred URL?? Not only am I doing a 301 redirect away from the /posts/jump/ format, but I am also specifying a rel=”canonical” of my preferred URL.

    “I don’t like this at all for a few reasons,” she continues. “Firstly, the breadcrumb trail doesn’t show up in the SERPS. Secondly, there is no reason for Google to be sending everyone to shortened URLs, because now nearly every visitor coming in from Google has to go through a 301 redirect before seeing any content, which causes an unnecessary delay in page load time. Thirdly, the /posts/jump/ URLs all tack on a #post123 anchor to the end, meaning that everyone is being instantaneously jumped halfway down the page to a specific post, instead of getting the complete picture, where they can start reading from the beginning. This certainly isn’t desirable behavior!”
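    Both signals she’s sending point away from the /posts/jump/ URL: the server 301s it, and the destination page declares a rel=”canonical”. A few lines of Python can check that the two signals agree (URLs taken from her example; the normalization rule here is a simplification, stripping only the #post anchor and the trailing page number):

```python
jump_url = "http://www.daniweb.com/posts/jump/1794174"
redirect_target = ("http://www.daniweb.com/web-development/php/threads/"
                   "420572/php-apotrophe-issue/1#post1794174")
canonical = ("http://www.daniweb.com/web-development/php/threads/"
             "420572/php-apotrophe-issue")

def strip_fragment_and_page(url):
    # Drop the #post anchor, then a trailing /1 page marker, for comparison.
    url = url.split("#")[0]
    return url[:-2] if url.endswith("/1") else url

# The 301 target, once normalized, agrees with the declared canonical:
# the signals are consistent, which is what makes Google's choice of the
# /posts/jump/ URL in the SERPs so puzzling.
print(strip_fragment_and_page(redirect_target) == canonical)  # -> True
```

    When the redirect and the canonical disagree, search engines are documented to treat them as hints and pick their own winner; here they agree, so there’s no such ambiguity to blame.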

    You can read the post for further elaboration.

    Dani’s second observation:

    After skimming the first 40 or 50 pages of the Google search results for site:daniweb.com, it’s essentially entirely a mix of two types of URLs. Those in the /posts/jump/ format, and links to member profiles. Essentially, two types of pages which are both not what I would consider putting our best foot forward.

    We currently have nearly one million members, and therefore nearly one million member profiles. However, we choose to use the rel=”noindex” meta tag directive on about 850,000 of the member profiles, only allowing those by good contributors to be indexed. I think it’s a happy medium between allowing our good contributors to have their profiles found in Google by prospective employers and clients searching for their name, and not having one million member profiles saturate our search results. We allow just under 100,000 of our 950,000+ member profiles to be indexed.

    However, as mentioned, it just seems as if member profiles are being ranked too high up and just way too abundant when doing a site:daniweb.com, overshadowing our content. This was not the case before the relaunch, and nothing changed in terms of our noindex approach.

    Based on prior experience, the quality of the results when I do a site:daniweb.com has a direct correlation to whether Google has a strong grasp of our navigation structure and is indexing our site the way that I want them to. I noticed when I was going through my Panda ordeal that, at the beginning, doing a site: query gave very random results, listing our non-important pages first and really giving very messy, non-quality results. Towards the end of our recovery, the results were really high quality, with our best content being shown on the first chunk of pages.
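    The selective-indexing policy she describes (she calls it a rel=”noindex” meta tag; the standard form is a robots meta tag with a noindex value) boils down to emitting that tag on most profile pages and omitting it for strong contributors. A minimal sketch — the threshold and fields are hypothetical, since DaniWeb’s actual criteria aren’t public:

```python
def robots_meta(member):
    """Return the robots meta tag to emit on a member's profile page.

    Profiles from strong contributors get no tag (indexable);
    everyone else gets noindex. The cutoff below is hypothetical.
    """
    if member.get("quality_posts", 0) >= 50:
        return ""  # no tag: profile may be indexed
    return '<meta name="robots" content="noindex">'

veteran = {"name": "dani", "quality_posts": 12000}
lurker = {"name": "new_user", "quality_posts": 2}

print(robots_meta(veteran))  # -> (empty: indexable)
print(robots_meta(lurker))   # -> <meta name="robots" content="noindex">
```

    Note that noindex only keeps pages out of the index; it doesn’t stop them from being crawled or from passing their weight to the profiles that remain indexable.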

    The bottom line, it seems, according to Horowitz, is that Google has “no grasp on the structure” of the site. Once again, you can read her post in its entirety for further details and explanation from Horowitz herself.

    Until the most recent issue, DaniWeb was clearly having a lot of success in the post-Panda world. When asked what she attributes this success to, Horowitz tells WebProNews, “We were at an all-time high in terms of traffic, and there was still constant growth. I definitely don’t think it was just the Panda recovery but all of the other positive SEO changes I made when we were being Pandalized that contributed to our post-Panda success.”

    It goes to show, Panda is just one of many signals Google has (over 200, in fact).

    “I’ve already documented just about everything that I did along the way, so there’s not much that I can think of adding,” she says. You can go back through the other links in these articles for more discussion with Dani about all of that. “At the end of the day, I think it just comes down to Google having a really good grasp of your entire site structure.”

    “Taking yet another massive hit was completely unexpected for us,” she says. “We launched at the exact same time as Panda rolled out (completely not planned), and therefore I don’t know which to attribute our latest round of issues to. It might be Panda, it might be issues with our new version, it might be a little of both, or it might be new signals that Google is now factoring into their algorithm.”

    Google has, of course, been providing monthly updates on many of the new changes it has been making. You can see the list for March here.

    There’s no question that search engines, including Google, are putting a lot more emphasis on social media these days. We asked Horowitz if she believes social media played a significant role in DaniWeb’s search visibility.

    “Absolutely,” she says. “I can definitely see the value in Twitter and Facebook likes, recommendations, and mentions. I think it just all goes into building a solid brand on the web. I forget where I read somewhere recently about how Google is favoring big brands. I don’t think you need to be a fortune 500 company to have earned a reputation for yourself on the web.”

    “While I personally still haven’t quite found the value in Google+, I’m not going to discount it for its part in building brand equity in the eyes of Google, either.”

    When asked if Google’s “Search Plus Your World” has been a positive thing for Daniweb, and/or the Google user experience (it’s received a lot of criticism), she says, “I happen to be a fan of personalized search results. Am I the only one?”

    Do you think Google’s results are better now in the post-Panda, “Search Plus Your World” era? Let us know what you think in the comments.