WebProNews

Tag: Search

  • Wajam: We Do Bing Social Better Than Bing

    Wajam recently released a new and improved version of its browser add-on that brings a more social experience to Google. The company has now released the counterpart for Bing and Yahoo.

    Since Wajam released the Google version, of course, Bing has unveiled its own major social overhaul. Wajam thinks it can do it better though. The company put together the following video and chart showing the differences:

    Bing vs Wajam on social

    It’s interesting to see Wajam get the edge on product recommendations, given that Bing has always touted shopping as one of its highlights. Of course, if you search for products with Bing’s regular search, Bing will still show the social bar, with Facebook likes and, if applicable, Twitter results.

    The new Wajam is currently rolling out for Yahoo and Bing.

    For more on Bing’s new features, read here.

  • Facebook Is Slowly Becoming the Reason to Pay the Internet Bill

    Think of all the things that you really don’t want to do without, America: cars, indoor plumbing, birth control, pizza delivery, the Biebs. You wouldn’t enjoy life all that much without any of them, would you?

    If a new summary from Experian Hitwise is any indication, another thing you’d probably suffer separation pangs from is Facebook. It sounds puffed up, but seriously, 9% of all internet visits in the United States during April 2012 went directly to Facebook.com. That’s nearly 1 visit to Facebook for every 10 times you go anywhere on the internet. What’s more, 1 in every 5 page views happened on Facebook. And this isn’t traffic driven by new users, either, because 96% of the visitors to Facebook last month were repeat offenders (or, at least, people who’d visited the site within the previous 30 days).

    Facebook welcomed over 1.6 billion visits a week and averaged more than 229 million visits a day in the U.S. year-to-date. For the year, Facebook has received more than 400 billion page views.

    America, there is no help. We. Are. Uh-ddicted.

    People aren’t just peeping in on Facebook, either, because the average time for a visit is 20 minutes. 56% of Facebook visitors are female, although that’s not really surprising because the tea leaves have been telling us for a while that women are dominating all things internet.

    More quick-release facts:

  • “facebook” has been the most searched term in the U.S. since July 18, 2009 (other variations of “facebook” occupy the 3rd, 5th, and 8th places on the list of top ten searched terms in 2011, too, which together account for 6% of all searches).
  • California, Texas, New York, Florida, Illinois, Pennsylvania, Ohio, Michigan, Georgia, and North Carolina all want to elect Facebook as governor. Actually, they don’t (not that I know of) but those ten states do account for 52% of visits to Facebook.
  • The fine residents of West Virginia, Kentucky, Maine, Vermont, Arkansas, Iowa, Indiana, Mississippi, Oklahoma, and Alabama are more likely to visit Facebook than the general online population.
  • Speaking of West Virginia, people living in the state’s capital, Charleston, are more likely to visit Facebook than anybody else.
  • The United States isn’t the only conquest by Facebook. Since this is a global mission, it’s also the top social networking site in Canada, the United Kingdom, Brazil, France, Australia, New Zealand, and Singapore.

    In Canada, New Zealand, Hong Kong, and Singapore, Facebook.com is the most visited site. For now, Facebook is merely (!!) the second most-visited site in the United Kingdom, Brazil, France, and Australia, but if the fervor for the website in other countries is any indication, it’s only a matter of time until Facebook completely takes over the internet in those countries, too.

  • Google On Knowledge Graph’s Wikipedia Integration: We Realize We’ll Never Be Perfect

    This week, Google unveiled its “Knowledge Graph,” which is garnering a great deal of hype in the search industry. Google has certainly hyped it up, calling it “another step away from raw keywords”. It puts a new kind of search result onto Google results pages: boxes on the side that display relevant information about things like people, celebrities, bands, works of art, books, movies, etc.

    It draws from various data sources, including some of Google’s own, but in all of the examples we’ve seen so far, Wikipedia is displayed as a prominent source of information. With that in mind, we wondered how susceptible to Wikipedia vandalism Google might be.

    When asked about this, a Google spokesperson tells WebProNews, “I can’t share a ton of detail here, but we’ve got quality controls in place to try to mitigate this kind of issue. We’ve also included a link so users can tell us when we may have an inaccuracy in our information.”

    “Our goal is to be useful; we realize we’ll never be perfect, just as a person’s or library’s knowledge is never complete,” he adds. “But we will strive to be accurate. More broadly, this is why we engineer 500+ updates to our algorithms every year — we’re constantly working to improve search, and to make things easier for our users.”

    Additionally, Danny Sullivan, who interviewed Google’s Amit Singhal during a keynote at SMX London this week, talked to Singhal about this issue. Here’s what he had to say, as shared by Sullivan:

    Singhal said that Google will use a combination of computer algorithms and human review to decide if a particular fact should be corrected. If Google makes a change, the source provider is told. This means, in particular, Wikipedia will be informed of any errors. It doesn’t have to change anything, but apparently the service is looking forward to the feedback.

    “They really are excited about it. They get to get feedback from a much bigger group of people,” Singhal said.

    Wikipedia has 3,951,359 content pages. Wikimedia projects have so far seen over 1.5 billion edits. I would have to imagine the bulk of these have been to Wikipedia.

  • Matt Cutts On The Hardware & Software That Power Googlebot

    Google uploaded a new Webmaster Help video from Matt Cutts, which addresses a question about the hardware/server-side software that powers a typical Googlebot server.

    “So one of the secrets of Google is that rather than employing these mainframe machines, this heavy iron, big iron kind of stuff, if you were to go into a Google data center and look at an example rack, it would look a lot like a PC,” says Cutts. “So there’s commodity PC parts. It’s the sort of thing where you’d recognize a lot of the stuff from having opened up your own computer, and what’s interesting is rather than have like special Googlebot web crawling servers, we tend to say, OK, build a whole bunch of different servers that can be used interchangeably for things like Googlebot, or web serving, or indexing. And then we have this fleet, this armada of machines, and you can deploy it on different types of tasks and different types of processing.”

    “So hardware wise, they’re not exactly the same, but they look a lot like regular commodity PCs,” he adds. “And there’s no difference between Googlebot servers versus regular servers at Google. You might have differences in RAM or hard disk, but in general, it’s the same sorts of stuff.”

    On the software side, Google of course builds everything itself, so as not to have to rely on third parties. Cutts says there’s a running joke at Google along the lines of “we don’t just build the cars ourselves, and we don’t just build the tires ourselves. We actually vulcanize the rubber on the tires ourselves.”

    “We tend to look at everything all the way down to the metal,” Cutts explains. “I mean, if you think about it, there’s data center efficiency. There’s power efficiency on the motherboards. And so if you can sort of keep an eye on everything all the way down, you can make your stuff a lot more efficient, a lot more powerful. You’re not wasting things because you use some outside vendor and it’s black box.”

    A couple months ago, Google put out a blog post discussing its data center efficiency, indicating that they are getting even more efficient.

    “In the same way that you might examine your electricity bill and then tweak the thermostat, we constantly track our energy consumption and use that data to make improvements to our infrastructure. As a result, our data centers use 50 percent less energy than the typical data center,” wrote Joe Kava, Senior Director, data center construction and operations at Google.

    Cutts says Google uses a lot of Linux-based machines and Linux-based servers.

    “We’ve got a lot of Linux kernel hackers,” he says. “And we tend to have software that we’ve built pretty much from the ground up to do all the different specialized tasks. So even to the point of our web servers. We don’t use Apache. We don’t use IIS. We use something called GWS, which stands for the Google Web Server.”

    “So by having our own binaries that we’ve built from our own stuff and building that stack all the way up, it really unlocks a lot of efficiency,” he adds. “It makes sure that there’s nothing that you can’t go in and tweak to get performance gains or to fix if you find bugs.”

    If you’re interested in how Google really works, you should watch this video too:

    Google says the average search query travels as much as 1,500 miles.

  • Knowledge Graph: Google Officially Announces Its “Things” Results

    Google has formally announced the “Knowledge Graph,” its way of providing results about “things”. We’ve reported on the products of this a couple of times, as Google has been testing them.

    An example would be when you search for a band, and Google puts some boxes on the side of the search results page with some specific info about that band. Likewise for movies, actors, books and people. According to the company, it also includes landmarks, cities, sports teams, buildings, geographical features, celestial objects, works of art, and more.

    Beatles on Google

    The main theme of the Knowledge Graph, as Google is presenting it, is that it is making Google smarter and better at giving you answers. Better at distinguishing what you mean by certain queries, which may come with more than one meaning. Google gives the example of Taj Mahal: “do you mean Taj Mahal the monument, or Taj Mahal the musician? Now Google understands the difference, and can narrow your search results just to the one you mean.”

    Google Taj Mahal

    Google put out the following video talking about it:

    This appears to be the big Google change that was discussed in a popular Wall Street Journal article in March, which we wrote about here. In our take, we talked about how Google is doing more to keep people from having to leave its own pages by providing more info on them – basically, users have fewer reasons to click through to other sites. It’s worth noting, however, that Google SVP of Engineering Amit Singhal indicated at SMX London this week that Google’s Search Plus Your World personalized results are generating greater clickthrough rates for search results.

    According to the WSJ article, Google’s 2010 acquisition of Metaweb plays a significant role in what is now known as the Knowledge Graph.

    Metaweb came with a big open database of 12 million things (including movies, books, TV shows, celebrities, locations, companies and more) called Freebase. There’s more to it than that though.

    “Google’s Knowledge Graph isn’t just rooted in public sources such as Freebase, Wikipedia and the CIA World Factbook,” says Singhal. “It’s also augmented at a much larger scale—because we’re focused on comprehensive breadth and depth. It currently contains more than 500 million objects, as well as more than 3.5 billion facts about and relationships between these different objects. And it’s tuned based on what people search for, and what we find out on the web.”

    I’m guessing there’s some Google Squared in there too.

    “How do we know which facts are most likely to be needed for each item? For that, we go back to our users and study in aggregate what they’ve been asking Google about each item,” explains Singhal. “For example, people are interested in knowing what books Charles Dickens wrote, whereas they’re less interested in what books Frank Lloyd Wright wrote, and more in what buildings he designed.”

    The Knowledge Graph is “gradually” rolling out to U.S. users in English.

  • Knowledge Graph Reduces Google’s Dependence On Keywords

    Earlier this month, we looked at Google’s big list of algorithm changes from April. One of those, referred to as Bi02sw41, indicated that Google may have reduced its dependence on keywords.

    Today, Google announced the Knowledge Graph, which Google says makes it smarter at determining what people mean when they’re searching for things. More on the Knowledge Graph here. It also comes with mobile-specific capabilities.

    Google is indicating that this is a step away from keywords. In the official announcement, SVP, Engineering, Amit Singhal, says:

    Take a query like [taj mahal]. For more than four decades, search has essentially been about matching keywords to queries. To a search engine the words [taj mahal] have been just that—two words.

    But we all know that [taj mahal] has a much richer meaning. You might think of one of the world’s most beautiful monuments, or a Grammy Award-winning musician, or possibly even a casino in Atlantic City, NJ. Or, depending on when you last ate, the nearest Indian restaurant. It’s why we’ve been working on an intelligent model—in geek-speak, a “graph”—that understands real-world entities and their relationships to one another: things, not strings.

    Google’s head of webspam, Matt Cutts, tweeted about the feature:

    Big search news: http://t.co/ZMiB88BV Moving from keywords toward knowledge of real-world entities and their relationships.

    On Google+, Cutts said, “Google just announced its Knowledge Graph. It’s another step away from raw keywords (without knowing what those words really mean) toward understanding things in the real-world and how they relate to each other. The knowledge graph improves our ability to understand the intent of a query so we can give better answers and search results.”

    Keywords have, of course, been a major point of spam, which Google is working hard to eliminate (see Penguin update). The less Google can rely on keywords to deliver relevant results, the less susceptible to spam it should be.

    I don’t think the Knowledge Graph has done anything to diminish the value of using relevant keywords in your content, and it doesn’t seem to affect the regular, organic web results, but who knows if this will change somewhere down the line.

    It is interesting to see Google continue to clutter up its search results pages, given that its clean design was one of the big differentiators of the search engine in its early days.

  • Bing Out-Googles Google With Clean Results Pages. Like It?

    When Google rose to popularity, ever so long ago, it was considered to be revolutionizing search. Think about what search was like back then. AltaVista, Yahoo, and a bunch of others were competing for your queries, but Google brought a different approach. PageRank was a huge part of that, and is often credited as the big differentiator, but another key element to what made Google stand out was its simplistic design.

    Do you miss the days when Google’s design was simpler? Do you think Bing is outdoing Google in design and usability? Let us know in the comments.

    Google still largely maintains its simplicity on its homepage (though it has a less simplistic homepage option in iGoogle), but in the search results it’s another story. Search engines in the pre-Google days were essentially portals. Google, in its early days, was just fresh, clean search. These days, Google is much more portal-esque, with all of the company’s various products that are not only available from links across the broader Google experience (the top navigation and whatnot), but also injected into the search results in various capacities.

    Much of what Google has on its search results pages these days has been added at different times. It wasn’t all overnight, but Bing’s redesign really shines a spotlight on just how much more complex Google results are these days.

    When Bing launched, we saw Google start doing various things that looked more like what Bing was doing. That’s why it’s interesting to see today that Bing has implemented some changes that are more reminiscent of Google, or at least of a past version of Google.

    Here’s what Bing’s results look like now:

    New Bing SERP

    Here are Google’s for the same query:

    Google SERP

    One of the big differentiators of Bing when it was first launched was the left panel of search options, which Google also adopted. Now, the two search engines have essentially reversed in design.

    Of course some Google results are even more cluttered. Look at this one for “hotels”:

    Google SERP

    And that’s before the addition of the new paid inclusion style sponsored listings. You have the Google+ pages being promoted, the maps section, etc. It is very cluttered compared to Bing’s version:

    Bing SERP

    Bing’s blog provides a nice before and after comparison of its new design:

    Bing Before and After

    “Over the past few months, we’ve run dozens of experiments to determine how you read our pages to deliver the link you’re looking for,” says Bing Principal Group Program Manager Sally Salas. “Based on that feedback, we’ve tuned the site to make the entire page easier to scan, removing unnecessary distractions, and making the overall experience more predictable and useful. This refreshed design helps you do more with search—and gives us a canvas for bringing future innovation to you.”

    “The new experience is more than skin-deep,” says Salas. “You will also notice faster page-load times and improved relevance under the hood. After all, our goal is to help people spend less time searching and more time doing. And changing how we look is the next big step in doing just that.”

    Bing’s new design also highlights its use of Facebook (as opposed to Google’s +1s) in a more restrained and appealing way by showing thumbs up next to results that your Facebook friends have liked:

    Facebook likes on Bing

    Josh Constine, who points out the Facebook-related tweaks, reports that the changes are fully rolling out.

    Bing’s integration with Facebook is nothing new, but it’s likely that Bing will continue to look for ways to make it more useful, and this may appeal to users who have been on Facebook for years, establishing and cultivating relationships, who just aren’t getting the same kind of engagement on Google+ (if they’re using it at all).

    The whole design element is kind of funny, considering that Google has historically taken a simplistic approach to design, which is still evident on its home page. It really illustrates how Google has evolved away from this approach on the results pages though, for better or worse.

    Google is packing a lot of features into search results these days, however. Do you think this makes the pages more usable, or too cluttered? Let us know what you think. What do you think about Bing’s changes?

    Bing says it is also testing out new ideas for its homepage, including a larger version of its daily image.

  • Rand Fishkin’s Negative SEO Challenge: 40K Questionable Links And Ranking Well

    Last month, we reported that SEOmoz CEO Rand Fishkin issued a negative SEO challenge. He challenged people to take down SEOmoz or RandFishkin.com using negative SEO tactics.

    “I’ve never seen it work on a truly clean, established site,” Fishkin told us at the time. He is confident enough in his sites’ link profiles and reputation. He also said, “I’d rather they target me/us than someone else. We can take the hit and we can help publicize/reach the right folks if something does go wrong. Other targets probably wouldn’t be so lucky.”

    We had a conversation with Fishkin today about the Penguin update, and about a new SEOmoz project related to webspam. We also asked for an update on how the challenge is going, and he said, “On the negative SEO front – I did notice that my personal blog had ~40,000 more links (from some very questionable new sources) as of last week. It’s still ranking well, though!”

    It sounds like the challenge is working out so far, which certainly looks good on Google’s part, especially in light of the Penguin update, and the opinions flying around about negative SEO. Just peruse any comment thread or discussion forum on the topic and there’s a good chance you’ll run into some of this discussion.

    I’m guessing the challenge is still on the table, but so far, Fishkin doesn’t seem to be having any problems.

    Of course, most people don’t have the link profile or reputation that Fishkin has established, but that also speaks to the need for content producers to work on building both.

  • Google Penguin Update Punishes WordPress Theme Creators?

    James Farmer at WPMU.org wrote a very interesting Penguin-related article, which doesn’t make the update look too great, despite its apparently honorable intentions.

    The update hit WPMU.org, sending it from 8,580 visits from Google on one day pre-Penguin to 1,527 a week later. Farmer shares an Analytics graph illustrating the steep drop:

    Penguin drop

    Farmer maintains that WPMU.org engages in no keyword stuffing or link schemes, and has no quality issues (presumably Panda wasn’t an issue).

    According to Farmer, the Sydney Morning Herald spoke with Matt Cutts about the issue (which may or may not appear in an article), and he provided them with three problem links pointing to WPMU.org: a site pirating their software, and two links from one spam blog (splog) using an old version of one of their WordPress themes with a link in the footer. According to Farmer, Cutts “said that we should consider the fact that we were possibly damaged by the removal of credit from links such as these.”

    That raises a significant question: why were pirate sites and splogs getting so much credence to begin with? And why did they make such an impact on a site with a reasonably sized, loyal audience, one that appears to be a legitimate, quality site with many social followers?

    Farmer wonders the same thing. He writes, “We’re a massively established news source that’s been running since March 2008, picking up over 10,400+ Facebook likes, 15,600+ Twitter followers and – to cap it all 2,537 +1s and 4,276 FeedBurner subscribers – as measured by Google!”

    “How could a bunch of incredibly low quality, spammy, rubbish (I mean a .info site… please!) footer links have made that much of a difference to a site of our size, content and reputation, unless Google has been absolutely, utterly inept for the last 4 years (and I doubt that that’s the case),” he adds.

    Farmer concludes that the site was punished for distributing WordPress themes. That is, specifically, for creating the themes that people wanted to use, and being punished because spammers also used them and linked to the site. He suggests to others who may have this issue that they remove or add nofollow to any attribution link they put in anything they release.
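
    For theme authors wondering what that fix looks like in practice, here’s a minimal sketch in Python (the link text and domain below are hypothetical, not WPMU.org’s actual markup): the attribution link ships with a rel="nofollow" attribute so it no longer passes PageRank back to the theme’s site.

    # Minimal sketch of the suggested fix (hypothetical link and domain):
    # give the theme's footer credit link rel="nofollow" so it no longer passes PageRank.
    credit_link = '<a href="http://example-theme-author.com">Theme by Example Author</a>'

    # The nofollowed version a theme author would ship instead:
    nofollowed = credit_link.replace('<a ', '<a rel="nofollow" ', 1)
    print(nofollowed)
    # -> <a rel="nofollow" href="http://example-theme-author.com">Theme by Example Author</a>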

    Hat tip to SEOmoz CEO Rand Fishkin for tweeting the article. Fishkin, by the way, has acknowledged that Penguin hasn’t been Google’s greatest work. He recently told WebProNews, “It’s done a nice job of waking up a lot of folks who never thought Google would take this type of aggressive, anti-manipulative action, but I think the execution’s actually somewhat less high quality than what Google usually rolls out (lots of search results that look very strange or clearly got worse, and plenty of sites that probably shouldn’t have been hit).”

    The whole thing speaks volumes about what many have been saying about Penguin’s effects on negative SEO practices – the kind that Fishkin has challenged the web with. For Fishkin, however, everything seems to be going well so far.

    Google is usually quick to admit that “no algorithm is perfect,” and I’m guessing they know as much about Penguin. It will be interesting to see whether sites that shouldn’t have been hit recover in reasonably timely fashion, although at this point, it’s hardly timely anymore.

  • Google’s Knowledge Graph: Less Traffic To More Sites?

    Back in March, the Wall Street Journal put out a big article about what apparently went on to become Google’s Knowledge Graph. Google made the formal announcement today. For more on the Knowledge Graph itself, see:

    Knowledge Graph: Google Officially Announces Its “Things” Results

    Google’s Knowledge Graph Comes With Mobile-Specific Capabilities

    Knowledge Graph Reduces Google’s Dependence On Keywords

    Knowledge Graph: Google Gets Tight With Wikipedia

    Or, you can just watch this video:

    When the WSJ put that article out, we wrote one talking about how Google is giving users fewer reasons to click through to other sites. The Knowledge Graph would seemingly be a major push in that direction – the direction of more info directly on Google’s pages.

    Much of the Knowledge Graph seems to be powered by Wikipedia. You have to wonder how much less traffic Wikipedia, as well as the other sources providing the Knowledge Graph data, will get out of this, since more info on the page means potentially fewer clicks on other results on the page. Even if the Wikipedia page were the first result, it’s not necessarily the one the user would have clicked on. Seeing relevant information on the page before clicking may just prevent clicks on any of the other links on the page.

    Certainly, this makes Google more efficient, but what does it mean for other sites? If the Knowledge Graph grows and grows, that impact could be far greater in the future.

    Danny Sullivan at Search Engine Land actually discussed this very topic with Google’s Amit Singhal (who announced the product on Google’s blog) at SMX London. Sullivan writes:

    Singhal’s response is that publishers shouldn’t worry. He said that most of these types of queries, Google has found, don’t take traffic away from most sites. Part of this seems to be that the boxes encourage more searching, which in turn still eventually takes people to external sites.

    Still, some are going to lose out, he admits. But he sees that as something that was going to happen inevitably, anyway, using a “2+2” metaphor. If people are searching for 2+2, why shouldn’t Google give a direct answer to that versus sending searchers to a site? By the way, Google does do math like this already and has for years. Emphasis is mine.

    Google is only going to want to improve its Knowledge Graph, which can only mean more data, and information on more results pages.

    Additionally, Sullivan makes a great point about publishers putting together info that Wikipedia or Freebase (another of Google’s sources) could harvest. Don’t forget that Wikipedia entries come with source links. It doesn’t look like the original publishers who provided those sources will get much out of the Knowledge Graph’s offerings.

    Singhal did say at SMX that Google’s Search Plus Your World personalized search feature is improving clickthrough rates for search results. Perhaps there is something to be said for social signals after all.

  • New Bing Is Available To Everyone In The U.S.

    Last week, Bing unveiled some new changes to its search interface, including greater social integration, and a social pane on the right-hand side, showing results from your Facebook friends and “people who know” from Twitter. Bing calls this pane “sidebar”.

    There are other interesting features in this pane, such as the ability to share specific search results with your Facebook friends. For example, if you want to show your friends a certain band that you like, you can search for that band and share whichever links you want from the search results by clicking a button that appears next to each result.

    Bing Social Results

    “With sidebar, Bing brings together the best of the web, with what experts and your friends know, giving you the confidence to act,” says Bing VP Derrick Connell. “This new way to search lets you share, discover, and interact with friends like you do in real life. If you’re on the go, you’ll notice we’ve optimized the layout and placement of the social results on the mobile device for smaller screen sizes and for touch input, so the user experience will be different than what people see on a PC.”

    “You may not always see friends you expect to show up for a number of reasons,” Connell notes. “Bing uses public Facebook information and content you’ve given Bing permission to use, such as friends’ photos on Facebook. We won’t match friends based on other Facebook content such as status updates or check-ins. Bing also respects you and your friends’ privacy settings so you won’t see friends who have opted out of Facebook instant personalization or blocked the Bing app.”

    Unfortunately, this functionality isn’t present on Bing’s image search results pages or its video search results pages. People like to share videos and images, so it seems like these would be good places for such functionality. Perhaps even more so than regular search results. I wouldn’t be surprised to see such a feature appear in the future.

    If you’re in the U.S., you can go to bing.com/new, and access the new Bing. You can also go to Bing.com, and you should see a notification about it, prompting you to check it out.

    One might view Bing’s new socially-focused design as an aim at Google’s recently launched Search Plus Your World social search features. Google’s Amit Singhal said at a conference this week, by the way, that those results have improved clickthrough rates on search results.

    Bing does clearly have a major advantage on the social search side of things, with Facebook and Twitter integration, not present in Google’s offerings.

    According to Experian Hitwise, Bing.com searches in the U.S. were up 6% in April month-over-month. They were up 16% year-over-year. Google was down 3% month-over-month and 5% year-over-year.

  • Google: Personalized Search Results Are Lifting Clickthrough Rates

    Google launched Search Plus Your World earlier this year. Most Google users probably just know it as Google filling their results with a lot more results based on social connections. A lot of users complained about it, but Google appears to consider the whole thing a success (not unlike the Penguin update).

    Google Fellow Amit Singhal spoke at SMX London this morning, and talked about the feature, and search personalization in general.

    Daniel Waisberg at SMX sister site Search Engine Land liveblogged Singhal’s on-stage discussion with Danny Sullivan and Chris Sherman. Singhal indicated that SPYW is actually increasing search result clicks, and that the filter bubble is not much of an issue. From Waisberg’s liveblog:

    Amit says the key motivation behind Search Plus Your World is to have a secured search, it is the first baby step to achieve Google’s dream, and data shows that Google users like the personal results. It also gives the user one click removal from their personalized results. Google is currently analyzing and improving their personalization engine.
    Chris mentions that personalization can be narrowing, as it gives people the same results and they do not discover new things. Amit answers that there should be different points of views in any search results, and Google is aware of that and they balance between personalized and non-personalized results.

    Danny mentions a Pew research that concluded that people do not want personalization. Amit says “I am a scientist, when I look at researches I look at how the question was asked.” He discussed the specific research, and said that personalization is valuable for Google users. Danny asks: can you tell what percentage of personalized searches are clicked? Amit says people are clicking more than before on searches and it is lifting CTR from search pages. Chris mentions Bing Social efforts and how it is different from Google’s. Amit says: “the key challenge with personalization is that no one can judge a personalized search for someone else.” That’s why Google looks at the data about how users like their results. Search Plus Your World is the same approach as Universal Search, people have to find what they intend to find on their results.

    Bing, as you may know, unveiled a big redesign last week, which appears to be the search engine’s answer to Google’s SPYW personalized results. Bing, of course, has data from Facebook and Twitter, which Google doesn’t, which should be one of Bing’s biggest selling points, if you care about social results.

    There hasn’t been much indicating that Google will be gaining access to the Facebook and Twitter data anytime soon. The subject was mentioned briefly during the SMX London discussion. Waisberg liveblogs: “Danny mentions the integration Bing did with Twitter and Facebook, and how this might be good for users. Will Google do that in the future? Amit said that their contract with Twitter expired. Google cannot add Twitter and Facebook right now as their information is hidden behind a wall. It has been tough to build an integration in this terms.”

    Google’s lack of this data is extremely evident at times – particularly the lack of realtime search when big, breaking events are happening.

    The good news is that at least Twitter and Google are talking frequently. Twitter CEO Dick Costolo was recently quoted as saying, “We continue to talk to Google frequently and on an ongoing basis. They are a company that’s doing several different things right now. Those conversations have a complexity to them that is different than our conversations with the company.”

    Who knows where these talks may one day lead.

  • Some Free Directories Go Missing From Google, Some Paid Directories Doing Well

    There’s some discussion going on in the webmaster/SEO community that Google may have de-indexed some free web directories. Barry Schwartz at Search Engine Roundtable points to a WebmasterWorld forum thread on the subject.

    The thread begins with a post from user Sunnyujjawal, who says:

    While checking some sites links I found 50% free submission directories are out of G now. Will Google count such links in negative SEO or unnatural linking?

    Schwartz concurs that about 50% of the ones he searched for did not have listings.

    He points to one example: global-web-directory.org. Indeed, I’m getting no results for that site:

    global web directory

    I’m not sure about the 50% thing though. I’ve looked at a number of others, and haven’t come across many that were not showing listings (though I have no doubt that there are more out there). Either way, there are still a lot of these sites that are still in Google’s index. We do know, however, that quite a few of them recently received PageRank reductions with the recent update.

    This discussion happens to come at a time when we’ve been analyzing Google’s quality guidelines, and its treatment of a certain directory, Best Of The Web, which sells reviews for potential listings, which appear with links that pass PageRank.

    Other directories that follow a similar model may be experiencing similar treatment from Google. In that same WebmasterWorld thread, user Rasputin writes:

    I have a paid directory that I haven’t touched for about 3 years, only gets about 25 submissions ($10) a year – strange thing is, I just looked and not only is it well indexed but all the internal pages are now showing page rank – for a very long time they were all ‘greyed out’ after the google clamp-down on directories a couple of years ago.

    No idea when it came back, certainly nothing I’ve changed and pretty unlikely it’s attracted natural links.

    That’s pretty interesting.

    User Netmeg adds:

    I don’t think free or paid makes anywhere near as much of a difference as to whether or not the directory is actually curated for quality. Because if it isn’t, what other reason is there for it to exist other than to create links?

    That’s a very relevant point, and that seems to be Google’s reasoning, based on this video from Matt Cutts from several years ago:

    “Standard directory listings remain in our editors complete editorial control, and as such do not need the nofollow tag,” Best Of The Web President Greg Hartnett told WebProNews. “An editor looked at those listings (pay for review or not) and decided that they meet editorial guidelines and as such merit a listing. We vouch for that listing, so why would we nofollow it?”

    If you go to global-web-directory.org’s submission page, it would appear that they violate Google’s quality guidelines. There is a pricing structure as follows:

    Express Reviews – $2
    Regular Reviews – Free
    Regular Reviews with reciprocal – Free

    While they advertise a paid review process, it’s clearly much different than how Best Of The Web operates. The only payment is for speeding up the review process, from the looks of it. Otherwise it’s free, and they’ll even throw in a reciprocal link for free. That could be the part that Google has a problem with. If sites are really being “reviewed” for quality, perhaps that is one thing, but if you’re saying flat out that you’ll give a link back, that might fall under Google’s “link schemes” criteria, discussed in the quality guidelines.

    The guidelines do list “links intended to manipulate PageRank” as the first example, and it does look like the site attempts to show the listings’ PageRank right alongside the listings:

    If you really look around the site, however, you’ll find many category pages without listings, just displaying ads. It’s not hard to see why Google wouldn’t want this site in its index.

    Update: There’s an interesting post about this issue at Search News Central, from Terry Van Horne. Terry writes:

    Directories that would be candidates for this kind of “draconian” action were as good as de-indexed ages ago. We sent out our super staffer Mike, with our vetted list of directories to see what he could find. From that (top end list) we found 65 no change, 2 domains parked and 1 de-indexed site; roughly 1.3% were de-indexed.

    Next we went to our friends at Steam Driven Media for the last 100 (based on TBPR) from a list of 1500. From this group we found 1 with low indexation and 9 deindexed/gone – roughly 10% affected. Keep in mind, we have no idea how long these sites were out of the Google index.

    Van Horne questions whether directories are really “getting nuked or not”.

    So far, we’ve not really seen anything indicating it’s as big a change as the original poster in the WebmasterWorld thread made it out to be.

    Have you seen paid directories rising in Google? Free ones disappearing? Let us know what you’re seeing.

  • Google: Why Are You Asking Us If Your Ears Make You Look Fat?

    Google Fellow Amit Singhal spoke at SMX London this morning. Daniel Waisberg at SMX sister site Search Engine Land liveblogged the whole discussion. Towards the end, Singhal answered a humorous question from Danny Sullivan, who asked about funny searches Singhal had come across.

    The liveblog says: “Amit says that once he read a query along the lines ‘do my ear[s] make me look fat?’ Amit laughs: ‘why are you asking Google that? Go figure it alone!’”

    Judging by Google’s own search results for the query “do my ears make me look fat,” I’m guessing it’s because of the meme portrayed in the top three results, which all come from Cheezburger.com:

    Do my ears make me look fat?

    But maybe some people really do want to know if their ears make them look fat. I guess Google won’t be getting into physical criticism of its users anytime soon, although, you can probably judge for yourself how your ears make you look if you fire up a Google+ Hangout. Perhaps there are some universal search opportunities for Google there.

    A couple of us here at the office tried to ask Siri the same question several times, and just couldn’t get her to understand the question. She can’t seem to distinguish “my ears” from “my years”.

  • Watch Google’s Matt Cutts Give Some “Advice” On Ranking #1 (Humor)

    Google’s Matt Cutts has put out hundreds of videos as part of his webmaster help series. I’ll assure you that nothing like what you’re about to hear has ever appeared in any of them.

    Call it the anti-SEO help video of the decade, and if you’re a webmaster you can call it site suicide. You can laugh at this Matts Cutts parody video all you want, just don’t take any of its advice seriously.

    “In addition to keyword stuffing, we look at links to porn sites. Not that many people tend to link that much to sites within the porn industry. That’s the sort of thing that’s going to be really rewarding for users, so link to porn sites. Could it be annoying? Yes, it could be annoying, but that’s perfectly fine.”

    That’s one of the gems from this clever mashup from SEO guy Sam Applegate. He took (probably way too much) time to organize and analyze Cutts’ many videos and came up with this video on how to rank #1 in Google search. Except, as you may have derived from the last quote, this guide won’t have you ranking anywhere near #1.

    “I do think that Bing or Blekko or Duck Duck Go are potentially doing illegal things like hacking sites,” says Cutts in fragments. Check it out below:

    For his part, Cutts is aware of the video and his concern was with how much time it had to have taken its creator:

    @seosammo wow, how much time did that take?

    [h/t Search Engine Roundtable]

  • Want To Tell Google How To Improve? Tell Amit Singhal.

    Matt Cutts fields a whole lot of questions about Google. He often offers helpful advice via his blog, comments on other blogs, Twitter, and of course through his Webmaster Help videos, but Google Fellow Amit Singhal is the guy that leads the team that looks at all the messed up search results.

    Singhal spoke at SMX London this morning, in an on-stage interview with Danny Sullivan and Chris Sherman. While he didn’t delve into Penguin too much, other than to indicate that it has been a success, he did talk a little bit about dealing with flawed search results. Daniel Waisberg liveblogged the discussion at SMX’s sister site Search Engine Land. Here’s the relevant snippet:

    Chris asks Amit how is the evolution process at Google with so many updates; how does Google decide about which update goes live? Google has an internal system where every flawed search result is sent to Amit’s team. Based on that engineers are assigned to problems and solutions are tested on a sandbox. Then the engineer will show how the results will show after and before the update and the update is tested using an A/B test. They discuss the results and this loop runs several times until they find a change that is better in all aspects. After this process the change is send to a production environment for a very low percentage of real user traffic and see how the CTR is changed. Based on this, an independent analyst (that works for Google) will generate a report. Based on that report the group discuss and decides if the change is going to be launched or not. That’s how scientific the process is.
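
    To make that clickthrough comparison concrete, here’s a minimal sketch with made-up numbers (nothing below reflects Google’s actual tooling, traffic, or launch thresholds): a candidate change is exposed to a small bucket of real traffic and its CTR is compared against the control bucket before a launch decision is made.

    # Toy sketch of an A/B clickthrough comparison (all numbers are hypothetical):
    control = {"impressions": 100_000, "clicks": 36_200}     # traffic on the current ranking
    experiment = {"impressions": 100_000, "clicks": 37_100}  # small slice exposed to the candidate change

    def ctr(bucket):
        return bucket["clicks"] / bucket["impressions"]

    lift = (ctr(experiment) - ctr(control)) / ctr(control)
    print(f"control CTR: {ctr(control):.2%}  experiment CTR: {ctr(experiment):.2%}  lift: {lift:+.2%}")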

    As Waisberg notes, Google has recently shared several videos discussing how Google makes changes. You can watch these if you’re interested:

    This one has Cutts talking about Google’s experimentation process (among other things):

    According to Sullivan, who tweeted after the keynote discussion, Singhal wants user feedback:

    Think you know how Google Search should run better? @theamitsinghal asked for advice. Leave your comments here http://t.co/fJFbe1QI

    On Twitter, he’s @theamitsinghal. Here’s his Google+ profile.

    Don’t forget, Google has a feedback link at the bottom of every search results page. Of course, there are always spam reports as well.

    Image: Amit’s Google+ Profile Pic

  • Google Algorithm Changes For April: Big List Released

    As expected, Google has finally released its big list of algorithm changes for the month of April. It’s been an interesting month, to say the least, with not only the Penguin update, but a couple of Panda updates sprinkled in. There’s not a whole lot about either of those on this list, however, which is really a testament to just how many things Google is always doing to change its algorithm – signals (some of them, at least) which could help or hurt you in other ways besides the hugely publicized updates.

    We’ll certainly be digging a bit more into some of these in forthcoming articles. At a quick glance, I noticed a few more freshness-related tweaks. Google has also expanded its index base by 15%, which is interesting. As far as Penguin goes, Google does mention: “Keyword stuffing classifier improvement. [project codename “Spam”] We have classifiers designed to detect when a website is keyword stuffing. This change made the keyword stuffing classifier better.”

    Keyword stuffing is against Google’s quality guidelines, and was one of the specific things Matt Cutts mentioned in his announcement of the update.

    Interestingly, unlike previous lists, there is no mention of Panda whatsoever on this list, though there were 2 known Panda data refreshes during April.

    Here’s the list in its entirety:

    • Categorize paginated documents. [launch codename “Xirtam3”, project codename “CategorizePaginatedDocuments”] Sometimes, search results can be dominated by documents from a paginated series. This change helps surface more diverse results in such cases.
    • More language-relevant navigational results. [launch codename “Raquel”] For navigational searches when the user types in a web address, such as [bol.com], we generally try to rank that web address at the top. However, this isn’t always the best answer. For example, bol.com is a Dutch page, but many users are actually searching in Portuguese and are looking for the Brazilian email service, http://www.bol.uol.com.br/. This change takes into account language to help return the most relevant navigational results.
    • Country identification for webpages. [launch codename “sudoku”] Location is an important signal we use to surface content more relevant to a particular country. For a while we’ve had systems designed to detect when a website, subdomain, or directory is relevant to a set of countries. This change extends the granularity of those systems to the page level for sites that host user generated content, meaning that some pages on a particular site can be considered relevant to France, while others might be considered relevant to Spain.
    • Anchors bug fix. [launch codename “Organochloride”, project codename “Anchors”] This change fixed a bug related to our handling of anchors.
    • More domain diversity. [launch codename “Horde”, project codename “Domain Crowding”] Sometimes search returns too many results from the same domain. This change helps surface content from a more diverse set of domains.
    • More local sites from organizations. [project codename “ImpOrgMap2”] This change makes it more likely you’ll find an organization website from your country (e.g. mexico.cnn.com for Mexico rather than cnn.com).
    • Improvements to local navigational searches. [launch codename “onebar-l”] For searches that include location terms, e.g. [dunston mint seattle] or [Vaso Azzurro Restaurant 94043], we are more likely to rank the local navigational homepages in the top position, even in cases where the navigational page does not mention the location.
    • Improvements to how search terms are scored in ranking. [launch codename “Bi02sw41”] One of the most fundamental signals used in search is whether and how your search terms appear on the pages you’re searching. This change improves the way those terms are scored.
    • Disable salience in snippets. [launch codename “DSS”, project codename “Snippets”] This change updates our system for generating snippets to keep it consistent with other infrastructure improvements. It also simplifies and increases consistency in the snippet generation process.
    • More text from the beginning of the page in snippets. [launch codename “solar”, project codename “Snippets”] This change makes it more likely we’ll show text from the beginning of a page in snippets when that text is particularly relevant.
    • Smoother ranking changes for fresh results. [launch codename “sep”, project codename “Freshness”] We want to help you find the freshest results, particularly for searches with important new web content, such as breaking news topics. We try to promote content that appears to be fresh. This change applies a more granular classifier, leading to more nuanced changes in ranking based on freshness.
    • Improvement in a freshness signal. [launch codename “citron”, project codename “Freshness”] This change is a minor improvement to one of the freshness signals which helps to better identify fresh documents.
    • No freshness boost for low-quality content. [launch codename “NoRot”, project codename “Freshness”] We have modified a classifier we use to promote fresh content to exclude fresh content identified as particularly low-quality.
    • Tweak to trigger behavior for Instant Previews. This change narrows the trigger area for Instant Previews so that you won’t see a preview until you hover and pause over the icon to the right of each search result. In the past the feature would trigger if you moused into a larger button area.
    • Sunrise and sunset search feature internationalization. [project codename “sunrise-i18n”] We’ve internationalized the sunrise and sunset search feature to 33 new languages, so now you can more easily plan an evening jog before dusk or set your alarm clock to watch the sunrise with a friend.
    • Improvements to currency conversion search feature in Turkish. [launch codename “kur”, project codename “kur”] We launched improvements to the currency conversion search feature in Turkish. Try searching for [dolar kuru], [euro ne kadar], or [avro kaç para].
    • Improvements to news clustering for Serbian. [launch codename “serbian-5”] For news results, we generally try to cluster articles about the same story into groups. This change improves clustering in Serbian by better grouping articles written in Cyrillic and Latin. We also improved our use of “stemming” — a technique that relies on the “stem” or root of a word.
    • Better query interpretation. This launch helps us better interpret the likely intention of your search query as suggested by your last few searches.
    • News universal results serving improvements. [launch codename “inhale”] This change streamlines the serving of news results on Google by shifting to a more unified system architecture.
    • UI improvements for breaking news topics. [launch codename “Smoothie”, project codename “Smoothie”] We’ve improved the user interface for news results when you’re searching for a breaking news topic. You’ll often see a large image thumbnail alongside two fresh news results.
    • More comprehensive predictions for local queries. [project codename “Autocomplete”] This change improves the comprehensiveness of autocomplete predictions by expanding coverage for long-tail U.S. local search queries such as addresses or small businesses.
    • Improvements to triggering of public data search feature. [launch codename “Plunge_Local”, project codename “DIVE”] This launch improves triggering for the public data search feature, broadening the range of queries that will return helpful population and unemployment data.
    • Adding Japanese and Korean to error page classifier. [launch codename “maniac4jars”, project codename “Soft404”] We have signals designed to detect crypto 404 pages (also known as “soft 404s”), pages that return valid text to a browser, but the text only contains error messages, such as “Page not found.” It’s rare that a user will be looking for such a page, so it’s important we be able to detect them. This change extends a particular classifier to Japanese and Korean.
    • More efficient generation of alternative titles. [launch codename “HalfMarathon”] We use a variety of signals to generate titles in search results. This change makes the process more efficient, saving tremendous CPU resources without degrading quality.
    • More concise and/or informative titles. [launch codename “kebmo”] We look at a number of factors when deciding what to show for the title of a search result. This change means you’ll find more informative titles and/or more concise titles with the same information.
    • Fewer bad spell corrections internationally. [launch codename “Potage”, project codename “Spelling”] When you search for [mango tea], we don’t want to show spelling predictions like “Did you mean ‘mint tea’?” We have algorithms designed to prevent these “bad spell corrections” and this change internationalizes one of those algorithms.
    • More spelling corrections globally and in more languages. [launch codename “pita”, project codename “Autocomplete”] Sometimes autocomplete will correct your spelling before you’ve finished typing. We’ve been offering advanced spelling corrections in English, and recently we extended the comprehensiveness of this feature to cover more than 60 languages.
    • More spell corrections for long queries. [launch codename “caterpillar_new”, project codename “Spelling”] We rolled out a change making it more likely that your query will get a spell correction even if it’s longer than ten terms. You can watch uncut footage of when we decided to launch this from our past blog post.
    • More comprehensive triggering of “showing results for” goes international. [launch codename “ifprdym”, project codename “Spelling”] In some cases when you’ve misspelled a search, say [pnumatic], the results you find will actually be results for the corrected query, “pneumatic.” In the past, we haven’t always provided the explicit user interface to say, “Showing results for pneumatic” and the option to “Search instead for pnumatic.” We recently started showing the explicit “Showing results for” interface more often in these cases in English, and now we’re expanding that to new languages.
    • “Did you mean” suppression goes international. [launch codename “idymsup”, project codename “Spelling”] Sometimes the “Did you mean?” spelling feature predicts spelling corrections that are accurate, but wouldn’t actually be helpful if clicked. For example, the results for the predicted correction of your search may be nearly identical to the results for your original search. In these cases, inviting you to refine your search isn’t helpful. This change first checks a spell prediction to see if it’s useful before presenting it to the user. This algorithm was already rolled out in English, but now we’ve expanded to new languages.
    • Spelling model refresh and quality improvements. We’ve refreshed spelling models and launched quality improvements in 27 languages.
    • Fewer autocomplete predictions leading to low-quality results. [launch codename “Queens5”, project codename “Autocomplete”] We’ve rolled out a change designed to show fewer autocomplete predictions leading to low-quality results.
    • Improvements to SafeSearch for videos and images. [project codename “SafeSearch”] We’ve made improvements to our SafeSearch signals in videos and images mode, making it less likely you’ll see adult content when you aren’t looking for it.
    • Improved SafeSearch models. [launch codename “Squeezie”, project codename “SafeSearch”] This change improves our classifier used to categorize pages for SafeSearch in 40+ languages.
    • Improvements to SafeSearch signals in Russian. [project codename “SafeSearch”] This change makes it less likely that you’ll see adult content in Russian when you aren’t looking for it.
    • Increase base index size by 15%. [project codename “Indexing”] The base search index is our main index for serving search results and every query that comes into Google is matched against this index. This change increases the number of documents served by that index by 15%. *Note: We’re constantly tuning the size of our different indexes and changes may not always appear in these blog posts.
    • New index tier. [launch codename “cantina”, project codename “Indexing”] We keep our index in “tiers” where different documents are indexed at different rates depending on how relevant they are likely to be to users. This month we introduced an additional indexing tier to support continued comprehensiveness in search results.
    • Backend improvements in serving. [launch codename “Hedges”, project codename “Benson”] We’ve rolled out some improvements to our serving systems making them less computationally expensive and massively simplifying code.
    • “Sub-sitelinks” in expanded sitelinks. [launch codename “thanksgiving”] This improvement digs deeper into megasitelinks by showing sub-sitelinks instead of the normal snippet.
    • Better ranking of expanded sitelinks. [project codename “Megasitelinks”] This change improves the ranking of megasitelinks by providing a minimum score for the sitelink based on a score for the same URL used in general ranking.
    • Sitelinks data refresh. [launch codename “Saralee-76”] Sitelinks (the links that appear beneath some search results and link deeper into the site) are generated in part by an offline process that analyzes site structure and other data to determine the most relevant links to show users. We’ve recently updated the data through our offline process. These updates happen frequently (on the order of weeks).
    • Less snippet duplication in expanded sitelinks. [project codename “Megasitelinks”] We’ve adopted a new technique to reduce duplication in the snippets of expanded sitelinks.
    • Movie showtimes search feature for mobile in China, Korea and Japan. We’ve expanded our movie showtimes feature for mobile to China, Korea and Japan.
    • No freshness boost for low quality sites. [launch codename “NoRot”, project codename “Freshness”] We’ve modified a classifier we use to promote fresh content to exclude sites identified as particularly low-quality.
    • MLB search feature. [launch codename “BallFour”, project codename “Live Results”] As the MLB season began, we rolled out a new MLB search feature. Try searching for [sf giants score] or [mlb scores].
    • Spanish football (La Liga) search feature. This feature provides scores and information about teams playing in La Liga. Try searching for [barcelona fc] or [la liga].
    • Formula 1 racing search feature. [launch codename “CheckeredFlag”] This month we introduced a new search feature to help you find Formula 1 leaderboards and results. Try searching [formula 1] or [mark webber].
    • Tweaks to NHL search feature. We’ve improved the NHL search feature so it’s more likely to appear when relevant. Try searching for [nhl scores] or [capitals score].
    • Keyword stuffing classifier improvement. [project codename “Spam”] We have classifiers designed to detect when a website is keyword stuffing. This change made the keyword stuffing classifier better (see the illustrative sketch after this list).
    • More authoritative results. We’ve tweaked a signal we use to surface more authoritative content.
    • Better HTML5 resource caching for mobile. We’ve improved caching of different components of the search results page, dramatically reducing latency in a number of cases.
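
    Google doesn’t publish how its keyword stuffing classifier actually works, but the general idea of detecting unnaturally repetitive copy can be sketched with a toy term-frequency check. Everything below – the density function, the 0.15 threshold, the sample text – is a hypothetical illustration, not anything Google has confirmed:

    ```python
    # Hypothetical illustration only -- not Google's actual classifier.
    # A crude "keyword stuffing" check: flag a page whose most frequent
    # term accounts for an implausibly large share of its visible words.
    import re
    from collections import Counter

    def keyword_density(text):
        """Return the most frequent term and its share of all words."""
        words = re.findall(r"[a-z']+", text.lower())
        if not words:
            return "", 0.0
        term, count = Counter(words).most_common(1)[0]
        return term, count / len(words)

    def looks_stuffed(text, threshold=0.15):
        # The threshold is an arbitrary illustrative value, not a known Google number.
        _, density = keyword_density(text)
        return density > threshold

    if __name__ == "__main__":
        spammy = "cheap shoes cheap shoes buy cheap shoes best cheap shoes " * 10
        print(looks_stuffed(spammy))  # True -- one term dominates the page
    ```

    Real systems would of course weigh many more signals (markup, anchor text, hidden text, and so on); this only shows the kind of repetition such a classifier is meant to catch.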

    More to come…

  • Penguin Update Will Come Back (Like Panda), According To Report

    Danny Sullivan put out a new article with some fresh quotes from Matt Cutts. From this, we know that Cutts has deemed the Penguin update a success. In terms of false positives, he says it hasn’t had the same impact as the Panda or Florida updates, though Google has seen “a few cases where we might want to investigate more.”

    Sullivan confirmed what many of us had assumed was the case: Penguin will continue into the future, much like the Panda update. Cutts is even quoted in the article: “It is possible to clean things up…the bottom line is, try to resolve what you can.”

    The Good News

    Depending on your outlook, this could be taken as either good or bad news. On the good side, it means you can come back. Even if your site was destroyed by Penguin, you still have a shot at getting back into Google’s good graces – without even having to submit a reconsideration request. Google’s algorithm, assuming it does what it is supposed to, will detect that you are no longer in violation of Google’s guidelines and treat your site accordingly.

    The Bad News

    The bad news is that there is always the chance it won’t work the way it’s supposed to. As I’m sure you’re aware, there are already many, many complaints about the Penguin update. Here’s an interesting one. Many feel it hasn’t exactly done what it was supposed to do. Another perhaps less positive element of the news is that sites will have to stay on their toes, wondering whether something they’ve done will trigger future iterations of the Penguin update.

    Remember when Demand Media’s eHow was not hit by the Panda update when it first launched, but was later hit by another iteration of it, and had to delete hundreds of thousands of articles and undergo a huge change in design and, to some extent, business model?

    On the other hand, eHow’s content is the better for it, despite a plethora of angry writers who no longer get to contribute.

    There’s also the chance that some sites that seem to have escaped Penguin so far simply haven’t been hit yet. Of course, Danny makes a great point in that “for any site that ‘lost’ in the rankings, someone gained.”

    It will be interesting to see how often the Penguin update gets a refresh. There were two Panda refreshes in April alone (bookending the Penguin update). It might be even more interesting to see how many complaints there are when the refreshes come back, and how often they’re noticed. Even the last Panda update went unconfirmed for about a week.

    Either way, be prepared for Penguin news to keep popping up in the years ahead. Just like Panda. We’ll certainly continue to cover both.

  • AdWords Campaign Bid Simulator Launched

    Google announced the launch of the campaign bid simulator in AdWords. This is something of an extension of the bid simulator launched a few years ago. Previously, the bid simulator worked at the keyword and ad group level, but with the new launch, you can access it at the campaign level.

    It’s available in the Opportunities tab in AdWords, and lets you view bid changes (in aggregate) and model changes, “even when keywords or ad groups might not have enough data for this on their own,” according to AdWords Product manager Sheridan Kates.

    Campaign Bid Simulator

    “See what might happen if you increased or decreased all your bids by a specific percentage (10%, for example),” says Kates, listing the functions. “See whether you may need to increase your campaign budget to ensure it doesn’t become limited by budget at the new bid value.”

    With the feature, you can also see what would happen if you changed all your bids to a single value.
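
    To make that “what if” arithmetic concrete, here is a toy sketch of the two campaign-wide adjustments Kates describes: scaling every bid by a percentage, or setting every bid to a single value. The data and function names are made up for illustration and have nothing to do with the actual AdWords backend, which models auction dynamics and budget limits far more thoroughly than simple arithmetic on max CPC bids:

    ```python
    # Hypothetical illustration of the two campaign-wide bid adjustments
    # described above -- made-up data, not the AdWords API.

    campaign_bids = {  # ad group -> max CPC bid in dollars (made-up data)
        "running-shoes": 1.20,
        "trail-shoes": 0.85,
        "sandals": 0.60,
    }

    def scale_bids(bids, pct_change):
        """Raise or lower every bid by a percentage, e.g. +0.10 for +10%."""
        return {group: round(bid * (1 + pct_change), 2) for group, bid in bids.items()}

    def set_all_bids(bids, new_bid):
        """Set every bid in the campaign to a single value."""
        return {group: new_bid for group in bids}

    print(scale_bids(campaign_bids, 0.10))    # every bid raised by 10%
    print(set_all_bids(campaign_bids, 1.00))  # every bid set to $1.00
    ```

    The point of the real feature is that it estimates the traffic and cost consequences of such changes for you, even when individual keywords or ad groups lack enough data on their own.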

    From the campaign bid simulator, you can download bid simulation data (at the account or campaign level), as well as an AdWords Editor-compatible file containing the simulated bid amounts and the ad groups/keywords where they should be applied, according to Google.

  • Mahmoud Mokhtar Google Doodle Celebrates “Father of Modern Egyptian Sculpture”

    Google has been feeling rather Egyptian lately when it comes to their Doodles. Earlier this week, they displayed a fun little piece honoring Howard Carter, the man who first uncovered King Tut’s tomb. Today, Google is celebrating not an archaeologist, but a famed sculptor who influenced the world of modern Egyptian art in a big way.

    Today’s Doodle is dedicated to the sculptures of Mahmoud Mokhtar, “the father of modern Egyptian sculpture.”

    Today marks the 121st birthday of Mahmoud Mokhtar, the father of modern Egyptian sculpture. Our doodle, on homepages in the Middle East and North Africa today, illustrates Mokhtar’s most famous statue, Egypt’s Renaissance.

    As Google says, much of the world will miss out on this Doodle. But those who see it are treated to a representation of Mokhtar’s most famous work, Nahdit Misr, which forms the “g” and “l” of the Google logo. Nahdit Misr (Egypt’s Renaissance) is a work that strongly promotes Egyptian nationalism. It was completed in 1928 and now stands proudly at the gate of Cairo University.

    Mokhtar was one of the first artists to graduate from the School of Fine Arts in Cairo. After that, he traveled to Paris, where he studied at the École des Beaux-Arts (School of Fine Arts). After producing many important works like “The Secret Keeper” and “The Nile’s Bride,” Mokhtar died in 1934 at the young age of 42.

    [Image via Wikipedia]

  • Google Penguin Update Recovery: Matt Cutts Says Watch These 2 Videos

    Danny Sullivan at Search Engine Land put up a great Penguin article with some new quotes from Matt Cutts. We’ve referenced some of the points made in other articles, but one important thing to note from the whole thing is that Cutts pointed to two very specific videos that people should watch if they want to clean up their sites and recover from the Penguin update.

    We often share Google’s Webmaster Help videos, which feature Cutts giving advice based on user-submitted questions (or sometimes his own questions). I’m sure we’ve run these in the past, but according to Sullivan, Cutts pointed to these:

    Guess what: in both videos, he talks about Google’s quality guidelines. That is your recovery manual, as far as Google is concerned. Here are some articles we’ve posted recently specifically on different aspects of the guidelines:

    Google Penguin Update: Don’t Forget About Duplicate Content

    Google Penguin Update: A Lesson In Cloaking

    Google Penguin Update Recovery: Hidden Text And Links

    Recover From Google Penguin Update: Get Better At Links

    Google Penguin Update: 12 Tips Directly From Google

    Google Penguin Update Recovery: Getting Better At Keywords

    Google Penguin Update: Seriously, Avoid Doorway Pages

    Google Penguin Update And Affiliate Programs

    So, in your recovery plan, take all of this into account, along with the tips to which Cutts lent his seal of approval.

    And when all else fails, according to Cutts, you might want to just start over with a new site.