Let’s just get this out of the way right now – Google is insane (in a good way). The company has pulled off so many pranks today that it’s hard to keep track of them all, and they’re all pretty fantastic. Let’s jump into the fantastic journey that is Google’s massive April Fools’ Day campaign.
First, and still the best, was Google Maps’ 8-bit reimagining. Obviously inspired by classic NES games like Dragon Quest, it transformed the service into a real-world RPG of epic proportions. It even transformed some famous landmarks around the world into 8-bit video game structures. I want a game based on this right now, and Google already has a sale right here if it ever does make one.
Google Chrome also announced a new functionality that is sure to increase productivity tenfold – Multitask mode. This mode allows you to plug in and use as many mice as you have ports for. Use both your hands and maybe even your feet in your quest for multitasking.
It’s easy to forget that YouTube is part of the Google family sometimes. It’s easy to remember on April 1, however, because YouTube brings one of the best pranks of the bunch. The YouTube Collection allows you to order all of YouTube on DVD, Betamax, Laser Disc and more. It’s good for those times when you don’t have an Internet connection and want to watch dubstep remixes.
For mobile Gmail users, the company has introduced Gmail Tap. The new service simplifies typing on a mobile phone by reducing the keyboard from 26 keys down to two. It does this by bringing back Morse code.
In an attempt to document the Australian outback, Google is equipping over a thousand kangaroos with 360-degree head cameras for what it calls Google Street Roo. The cameras will be powered by solar panels stitched onto the jackets that these special kangaroos will be wearing. It’s a genius way of getting a natural look at what the Outback contains.
Google isn’t just content with innovating in the fields of maps and search. They are also bringing new technologies to AdWords. The company introduced Click-to-Teleport today and it allows customers to directly connect to businesses by teleporting themselves to said business through Search. The technology is in beta form so Google warns that users may become horribly disfigured like in “The Fly.”
I still find Google Search to be a bit vague sometimes in its results. It doesn’t get much better even when I use Advanced Search. Thankfully, Google has introduced Really Advanced Search. This allows users to search for terms with new metrics like “rhyming slang,” “font,” and “textured background.”
Google has also introduced a way for users to change the weather. Google says, “Don’t like the weather? Now you can change it in your region by selecting from the dropdown to change precipitation and setting your own temperature.” It’s a nifty option, and I would love to change it this week due to storms being in the forecast. I just don’t want to be held responsible for weather-changing shenanigans like I’m Cobra Commander or something.
It’s already sold out, but it’s worth mentioning that Google Offers was selling $1 unlimited good parking karma. For only $1, drivers would get:
Prime spots when you need them
Repels parking tickets
Includes 1 space buffer on each side
Shopping cart protection plan
No parallel parking for first 6 months
The fun thing about Google’s pranks is that they are all actually real in some form or another. A lot of the prank Web services can actually be used, and they’re fun to boot. Google has proven itself one of the masters of the April Fools’ prank because it makes them fun.
What was your favorite Google prank? Were there others that you especially liked? Let us know in the comments.
Much of the discussion in the SEO community of late has been related to Google’s efforts to “level the playing field” for mom-and-pops vs. those with bigger marketing budgets, and comments to this effect made by Matt Cutts at SXSW recently. He indicated that Google is working on things that would make it so people who “over-optimize” their sites don’t necessarily rank better than others who didn’t worry about SEO, but have great content.
To a great extent, Google has been working on these kinds of things for a long time. The Panda update was certainly designed to make content quality matter more, but Google also regularly gives tips about how to optimize your site better and releases lists of algorithmic changes, which practically beg webmasters to try and exploit them. Google, of course, doesn’t take this stance, but when it releases the signals, people pay attention and try to play to them. Why wouldn’t they?
Google knows this, of course, which is why they won’t release their entire list of signals, let alone talk about how much weight certain signals have compared to others, although if you pay close enough attention, you’ll sometimes catch hints at this too.
The whole de-indexing of paid blog/link networks plays into the concept of making over-optimization matter less, but based on Google’s webmaster guidelines, it seems like doing so would have always fit within the company’s policy.
When you play the black hat, or even gray hat game, you’re taking a big risk of being dealt a damaging penalty. Google didn’t even hesitate to penalize its own site for violating guidelines (at least after they were called out on it), which may have even cost Chrome some browser market share.
Going white hat after playing it at a darker shade in the past isn’t necessarily going to help your rankings either though, as one blogger indicated in a recent post at SEOBullshit:
I did paid links, paid reviews, and never, ever did any shit like “cloaking”, “spam”, or “stuffing.” Hence, the “grey” hat campaign type. I had awesome content. I had a crawlable site. It was perfect in every way. I used paid links and reviews to scream at GoogleBot, “Hey, notice me! I’m right here! I have killer content and reputable sites link to it.” The results were great. The money. Terrific. I left the competition scratching their heads since my site was HTTPS, it was hard to reverse engineer as most link-finding tools couldn’t really find my backlinks.
However, the stress of running a grey-hat campaign eventually wears on you and you long for the peace of a white hat campaign. So, I hatched a plan to wean my site from grey and pray that the results weren’t too bad. I expected a 15-25% drop in SERPS and traffic which I could then recover by getting a big relevant, content piece linked up to the pages where I removed the TLA’s.
Fucking failure. Total and monstrous failure.
He goes on to say his total traffic dropped by 72.5%.
Every time Google makes big adjustments to its algorithm, sites pay the price. Sometimes that price is deserved, and sometimes it’s not. I find that often, people tend to think they didn’t deserve to lose their rankings. Even with the latest Panda refresh, we had sites telling us about ranking declines.
The intro of a recent Entrepreneur article sums up the conundrum of the small business perfectly: “As a small business owner using the web to reach customers, you’ve surely been implementing search engine optimization tactics to make sure your site turns up high in web searches. But just when you might feel like you’re starting to get the hang of this SEO thing, it appears that search giant Google might start penalizing websites that are over-optimized.”
We understand that there are plenty of white hat SEO tactics that Google not only is OK with, but encourages. However, most people simply don’t know what SEO even is. Matt Cutts himself shared results this week from a survey he conducted, finding only one in five people in the U.S. have even heard of SEO.
It’s not surprising that sites would be tempted to go for the darker hat techniques. But as Google continues on this new (same) path of leveling the playing field, doing so may be more like playing with fire than ever. And once you start engaging in SEO’s dark arts, you may have a hard time returning to the lighter side, should you ever choose to do so.
Have you ever been helped or hurt by using black hat SEO tactics? Let us know in the comments (you don’t have to use your real name).
It looks like Facebook is finally taking search more seriously. The company is reportedly working overtime on improving its own search feature, which leads us to wonder if it may even have something bigger up its sleeve. We’ve written about the major opportunities Facebook has to make a big play in the search engine market and go head-to-head with Google several times in the past, and this news does very little to convince us this is not a possibility.
Bloomberg BusinessWeek reports that something like 24 Facebook engineers are working on “an improved search engine,” and the effort is being led by former Googler Lars Rasmussen. Interestingly enough, while I was working on this article I happened to get an email from a Googler pointing the report out to me. They didn’t say as much, but Google no doubt wants more attention brought to the fact that other major web entities have opportunities to compete with them. The EU is expected to make a decision in an antitrust investigation into Google as soon as after Easter.
Bloomberg cites “two people familiar with the project” as providers of this info. Presumably they are from Facebook itself, as the report says they didn’t want to be named because Facebook is in its pre-IPO quiet period. “The goal, they say, is to help users better sift through the volume of content that members create on the site, such as status updates, and the articles, videos, and other information across the Web that people “like” using Facebook’s omnipresent thumbs-up button,” the report, co-authored by Douglas MacMillan and Brad Stone, says. Emphasis added.
That last part is particularly interesting, but more on that later.
Facebook’s Search Feature
If you use Facebook (and given that Facebook has over 800 million users, there’s a good chance you do), you probably know that its search feature isn’t the greatest or most efficient tool for finding information. Sure, there are plenty of options to refine your search. You can view results by: all results, people, pages, places, groups, apps, events, music, web results, posts by friends, public posts, or posts in groups. Even still, the results are often unhelpful – even the filtered results.
Given Facebook’s enormous user base and all of the content that is posted to the social network every day, a competent search engine is needed badly. Just think how much more useful Facebook would be if you could easily use it to find things. As a business, think about how much better Facebook could work for you if you could better optimize for its search feature, and it delivered your product or service’s page to people searching with relevant needs – or perhaps better yet, when their friends are talking about or checking in at your business.
Facebook As A Search Engine
Again, there are a reported two dozen engineers working on improving Facebook’s search feature. It sounds like they’re really putting a lot of time and effort into it now. If it turns out to be a major improvement and is that useful, competing with Google for searches seems inevitable at one level or another.
Consider the emphasis Google and other search engines are putting on social these days. Earlier this year, Google launched “Search Plus Your World,” delivering results much more based on your social circles – particularly your Google+ circles. One major flaw to this approach is that people just aren’t using Google+ the way they’re using Facebook, no matter how Google chooses to deem a user an active user.
For many people (about 800 million or so), a Facebook search engine would much more closely resemble “search, plus their world”.
There are quite a few interesting angles to consider, should a true Facebook search engine become a reality. Would it be available only to users? Facebook has a whole lot of public content. Being signed in would only serve to make the results more personalized – kind of like with Google today – the main difference being that personalization with Facebook data is much more likely to be relevant than personalization based on Google+ interaction. This is not a slight on Google+ as a service. It’s just a fact that Facebook has been around for far longer, and has way more active users who engage with their closest friends and family members on a daily basis, sharing tons of photos, videos, status updates and links to web content.
Would Facebook even bother to index the public web the way Google and its peers do? Right now, Facebook uses Bing to pad its search results with web results. Facebook could continue this indefinitely, or it could simply compete with Bing too, somewhere down the road. Facebook doesn’t need to index the web the way Google does, however, to put a dent in Google’s search market share. Even if it can convince users to use its own revamped search feature for certain kinds of queries, those are queries users don’t need Google for.
I’ve long maintained that the biggest threat to Google’s search market share is likely not the threat of a single competitor, but the diversification of search in general. People are using more apps than ever (from more devices than ever), and just don’t have to rely on Google (or other traditional search engines) for access to content the way they used to. Take Twitter search, for example, which has become the go-to destination for finding quick info on events and topics in real time. When was the last time you turned to Google’s realtime search feature? It’s been a while, because it’s been MIA since Google’s partnership with Twitter expired last year. Sometimes a Twitter search is simply more relevant than a Google search for new information, despite Google’s increased efforts in freshness.
Google may even be setting itself up to push users to a Facebook search engine, should one arise. There has been a fair amount of discontent expressed regarding Google’s addition of Search Plus Your World. Much of this has no doubt been exaggerated by the media, but there is discontent there. What if Facebook had a marketing plan to go along with this hypothetical search engine? It shouldn’t be too hard for them to play that “search plus your actual world” angle up.
They’ve already done this to some extent. Not officially, exactly, but remember “Focus On The User” from Facebook Director of Product Blake Ross (with some help from engineers at Twitter and MySpace)?
And speaking of Twitter and MySpace, who’s to say they wouldn’t support a Facebook search engine, and lend access to their respective social data to make an even bigger, highly personalized social search engine? That could be incredibly powerful.
A conversation between two Business Insider writers would suggest that we won’t see Facebook as a “favorite web search engine any time soon,” but again, it doesn’t have to replace Google to make an impact.
About a year ago, we talked about a patent Facebook was awarded, called, “Visual tags for search results generated from social network information”. The description for that was:
Search results, including sponsored links and algorithmic search results, are generated in response to a query, and are marked based on frequency of clicks on the search results by members of social network who are within a predetermined degree of separation from the member who submitted the query. The markers are visual tags and comprise either a text string or an image.
That’s something else to keep in mind.
Revenue
There’s certainly plenty of opportunity to sell more Facebook ads (which are already getting pretty popular with businesses). It’s going to be much more about revenue at Facebook in the post-IPO world. Facebook is already superior to Google in terms of ad targeting by interest and demographic, as users can be targeted based on very specific things they have “liked”. Add search to the mix, and you also get users while they’re actively seeking something out – Google’s strong point. That’s the best of both worlds.
Facebook won’t have to please shareholders by showing that it can be a better search engine than Google, but if it can create a search engine, or even just an internal search feature, that people want to use, there is a huge opportunity to generate plenty of revenue from it. It may also result in some portion of searches that would otherwise have gone to Google (or Yahoo, Bing, Ask or whatever) going to Facebook instead, along with more cumulative time spent on Facebook.
What do you think? Would you use a Facebook search engine as a user? As an advertiser would you consider it an attractive option? How about an AdSense-like ad network for publishers? Share your thoughts in the comments.
It’s certainly not a major change that would alter Google’s functionality greatly. If you really want to break it down, however, I would say that it is a step backwards in user experience, as you would have to perform an extra click to get the options you may want. Big deal? Obviously not, but I don’t really see how having to click an extra time helps anything.
That said, it does cater to the clean design style Google likes. Even today, Google’s homepage has very little clutter and looks very similar to the way it has always looked. Compare it to Yahoo’s homepage.
Either way, just because Google is testing a feature doesn’t mean it will actually become a feature, so it’s probably not worth worrying about too much yet. In fact, in my mind it’s such an insignificant change, it’s probably not worth worrying about at all. If anything, it simply prevents users from noticing other features Google offers, which doesn’t seem optimal for Google’s purposes.
On a related note, Google put out a 30-minute video of some of its user experience staff talking about how they approach UX.
Is it just me or does Google seem to be getting a lot of negative press lately? There seems to be an unusual number of stories making the rounds about Google and whether or not it’s “evil.” There is even a Facebook page named “Google is Evil”!
First, let’s look at what Google actually says. Here’s the company’s code of conduct. The part about evil says:
“Don’t be evil.” Googlers generally apply those words to how we serve our users. But “Don’t be evil” is much more than that. Yes, it’s about providing our users unbiased access to information, focusing on their needs and giving them the best products and services that we can. But it’s also about doing the right thing more generally — following the law, acting honorably and treating each other with respect.
The Google Code of Conduct is one of the ways we put “Don’t be evil” into practice. It’s built around the recognition that everything we do in connection with our work at Google will be, and should be, measured against the highest possible standards of ethical business conduct. We set the bar that high for practical as well as aspirational reasons: Our commitment to the highest standards helps us hire great people, who then build great products, which in turn attract loyal users. Trust and mutual respect among employees and users are the foundation of our success, and they are something we need to earn every day.
The document takes the reader through the following sections: Serve Our Users, Respect Each Other, Avoid Conflicts of Interest, Preserve Confidentiality, Protect Google’s Assets, Ensure Financial Integrity and Responsibility, and Obey the Law.
The whole thing ends with the following line:
And remember . . . don’t be evil, and if you see something that you think isn’t right — speak up!
It seems like that part about trust might be the biggest area of concern, considering all of the talk out there on the newswires, the blogosphere, and the social networks.
Gizmodo has an article called “The Case Against Google“. This is mainly about privacy, how people “don’t trust Google with their data,” which is “new”.
“Many of us have entered into a contract with the ur search company because its claims to be a good actor inspired our trust,” writes the article’s author, Mat Honan. “Google has always claimed to put the interests of the user first. It’s worth questioning whether or not that’s still the case. Has Google reached a point where it must be evil?”
The article goes on to proclaim that “search is dying,” basically implying that “Search Plus Your World” is making people not want to use Google search anymore. I’ll be the first to admit that it’s got its issues, and that Google’s results could be a lot better these days, but there’s no way SPYW is killing Google search. Sorry. It’s just not. For all of the media outcry about it, I don’t know anybody in my personal life who has stopped using Google to search the web because of it. Most people just don’t care that much.
Google’s “Search Plus Your World,” which is essentially the integration of Google+ (with some other things sprinkled in) into search results, is no doubt driven by that ad revenue addiction in the end. The better Google+ does, the more Facebook-like data Google can get about you, and potentially use to help advertisers better target consumers. Some may find that evil, but advertising is how Google makes its money, which allows Google to do more things. Google is a business. It’s not a charity, though it does have some particularly un-evil charitable initiatives.
Danny Sullivan, who spends much of his time specifically writing about Google and the search industry, even told the New York Times recently, “I don’t think they were ever not evil,” though he did go out of his way to put that into greater context in his own article. He references another quote he gave the NYT: “They are a big company, and any big company is always going to have something happen that they don’t expect. But these things keep happening where you can’t even trust their word.”
“It pains me to say it, when I know so many people at Google truly and honestly mean for their company to be doing good things, to be trusted,” he adds in his own article. “It also pains me when I know Google has done many good things for the web as a whole. The fact that sites don’t have to pay just for the chance to be showing up in ‘free’ listings in search engines is largely down to the force of Google.”
The laundry list of the company’s violations and missteps has only grown in recent years, and you can find one for almost every area of interest. Regarding social networking, Google was caught rigging their search results to display items from their sources — Google+, for example — before those of their competitors, like Twitter and Facebook. So much fair and unbiased search.
Last year, Google was caught hosting ads from online Canadian pharmacies that led to illegal importation of prescription drugs. Google was forced to forfeit $500 million — a little slap on the wrist. But it was in the area of privacy that the company seems to have really blown it.
“Scientists from Berkeley spoke out on the dangers of nuclear war, on atomic proliferation and things like this. And it was the scientists that got people concerned. It was the scientists who spoke out to make the world a better place. And that’s a responsibility that I think I and others have.”
“Truthfully, the idea for writing this article was prompted by a conversation I had this morning about Stanford University. Specifically, we were discussing how the students have noticeably shifted alliances dramatically over the last decade,” Enderle writes. “A scant ten year ago they hated Microsoft and Google was the White Knight, yet it is truly amazing how those positions have reversed today.”
He talks about how ads at Google were initially perceived as “little more than a necessary evil to generate money and fund the firm,” adding, “The most fascinating aspect of all this? The apparent internal dislike for ads as something ‘unclean.’ Yet, the now conflicted company appears focused (perhaps a better word would be addicted) on the revenue the ads generate.”
Ads are still Google’s main driver of revenue, so they still provide the majority of Google’s funding. Doesn’t it make sense that Google would want to be “addicted” to the thing that is not only driving its business, but also funding innovation and more ambitious projects? You know they have self-driving cars, right? Did you know these cars are even inspiring state legislation?
Ads are only part of Enderle’s story, as he goes into operating strategy, mostly as a comparison to Microsoft’s strategy, and the much-publicized privacy issues with Safari.
In a PCWorld article, Enderle (again) asks if Google is “facing the beginning of the end.” This was in response to that much-talked about post from the former Google engineer James Whittaker, who said, “The Google I was passionate about was a technology company that empowered its employees to innovate. The Google I left was an advertising company with a single corporate-mandated focus.”
In the PCWorld article, Enderle does raise an interesting point: “Google should have worked to line up Microsoft and Facebook as partners, not competitors.” I’m not sure if it’s the right point, but it’s an interesting scenario to consider.
You might say Google brought this kind of discussion on itself, simply by making that “Don’t Be Evil” mantra a part of its code in the first place. Google has certainly grown a whole lot since that was created, but it does remain part of the company’s philosophy to this day, whether you think they’re honoring it or not. It may be the very fact that this is such a well-known part of the company’s founding and existence that sparks a much heftier amount of criticism, and stories with the words “Google” and “Evil” in the title together, than other companies are subject to. It makes for catchy headlines, for sure.
It’s interesting that not a lot of the articles out there (at least the ones calling the company evil) are about how Google is changing the way it delivers its search results, which could greatly impact the traffic it sends to other sites. Given all the hoopla around the Panda update for the past year or so, which saw some companies having to lay off employees, it’s a bit surprising that there isn’t more focus on this part of the discussion in the more mainstream news. Google is increasingly finding ways to keep people on its own properties longer (which means less time on sites like yours). To be clear, I don’t think this is necessarily “evil” either, but it’s certainly significant to doing business on the web.
Google uses over 200 signals to rank results, but its legendary PageRank algorithm, which is based on links, has led a lot of people to worry about links way too much. That’s not to say quality links aren’t still important, but just because you have a whole bunch of links, it doesn’t mean your site is going to rank well.
Google’s Matt Cutts posted an interesting webmaster help video under the title: “Will Google Provide More Link Data For All Sites?” It’s Cutts’ response to the user-submitted question:
In the wake of the demise of Yahoo Site Explorer, does Google Webmaster Tools plan to take up the reigns this product once provided to SEO’s everywhere?
Cutts responds, “What I think you’re asking is actually code for ‘will you give me a lot of links?’ and let me give you some context about Google’s policies on that. I know that Yahoo Site Explorer gave a lot of links, but Yahoo Site Explorer is going away. Microsoft used to give a lot of links. And they saw so much abuse and so many people hitting it really, really hard that I think they turn that off so that people wouldn’t be tempted to just keep pounding them and pounding their servers.”
“So our policy has been to give a subsample of links to anybody for any given page or any given site– and you can do that with a link colon command–and to give a much more exhaustive, much more full list of links to the actual site owner,” says Cutts. “And let me tell you why I think that’s a little bit more of a balanced plan. Yahoo Site Explorer, they were giving a lot of links, but they weren’t giving links that Google knew about. And certainly, they don’t know which links Google really trusts. And so I think a lot of people sometimes focus on the low-quality links that a competitor has, and they don’t realize that the vast majority of times, those links aren’t counting.”
“So, for example, the New York Times sent us a sample of literally thousands of links that they were wondering how many of these count because they’d gotten it from some third party or other source of links,” he adds. “And the answer was that basically none of those links had counted. And so it’s a little easy for people to get obsessed by looking at the backlinks of their competitors and saying, ‘oh, they’re doing this bad thing or that bad thing.’ And they might not know the good links. And they might not know that a lot of those links aren’t counted at all.”
“So I also think that it’s a relatively good policy because you deserve to know your own links,” he continues. “I think that’s perfectly defensible. But it doesn’t provide that much help to give all the links to a competitor site unless you’re maybe an SEO, or you’re a competitor, or something along those lines. So for somebody like a librarian or a power searcher or something like that, using link colon and getting a nice sample, a fair fraction of links to a particular page or to a particular website, is a very good policy.”
“I think that’s defensible, but I don’t expect us to show all the links that we know of for all the different sites that we know of, just because people tend to focus on the wrong thing,” he concludes. “They don’t know which links really count. So they tend to obsess about all the bad links their competitors have and only look at the good links that they have. And it’s probably the case that surfacing this data makes it so that you’re helping the people who really, really, really want to try to get all their competitors backlinks or whatever. And I just think it’s a little bit more equitable to say, OK, you’re allowed to see as many of the backlinks as we can give you for your own site, but maybe not for every other site. You can get a sampling, so you can get an idea of what they’re like, but I wouldn’t expect us to try to provide a full snapshot for every single site.”
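For readers unfamiliar with the “link colon” command Cutts keeps referring to: it’s simply the link: search operator. Typing a query like link:example.com into Google returns the sampled subset of backlinks he’s describing (example.com is just a placeholder domain here), while site owners see the fuller list in Webmaster Tools.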
Links obviously aren’t everything, and if you follow Google’s changes, it’s easy to see that other signals have been given a lot more significance in recent memory. This includes things like content quality, social signals and freshness. If you’re that worried about the number of links you have, you’re living in the wrong era of search.
Granted, links have value beyond search ranking. They still provide more potential referrals to your site, but in terms of Google, the search engine is moving more and more away from the traditional 10 organic links anyway, with more personalized results, fresher results, blended (universal search) results, and more direct answers.
Google’s head of web spam, Matt Cutts, used Google’s new Consumer Surveys product to put out his own survey about SEO, in which he determined that 1 in 5 people in the U.S. have heard of SEO.
“In my world, everyone I talk to has heard of search engine optimization (SEO),” he says on Google+. “But I’ve always wondered: do regular people in the U.S. know what SEO is? With Google’s new Consumer Surveys product, I can actually find out. I asked 1,576 people ‘Have you heard of ‘search engine optimization’?”
“It turns out only 1 in 5 people (20.4%) in the U.S. have heard of SEO!” he says.
“The survey also turned up an interesting gender difference: almost 25% of men have heard of SEO, but only about 16% of women have,” Cutts notes. “Doing this sort of market research in the past would have been slow, hard, and expensive. Asking 1,500 people a simple question only costs about $150.”
The survey may only be a small set of people compared to the actual population of the country, but my guess is that’s not that far off. In my experience, outside of work, most people have no idea what SEO is.
Google has been cracking down on lesser quality content littering its search results a great deal over the past year – probably more than any other time in the search engine’s history. Obviously, to those who follow the search industry, the Panda update has been leading the charge in this area.
Google has been de-indexing blog networks that webmasters have essentially been paying to get links. Do you think this will improve Google’s results? Share your thoughts in the comments.
One way that sites, including some with lesser-quality content, have been able to manipulate Google’s algorithm is through paid links and linking “schemes”. Google has long had policies against these things, and has not hesitated to penalize sites it busted. See the JC Penney and Overstock.com incidents from last year for a couple of examples (not necessarily the best examples of low quality, but of getting busted). Google even penalized its own Chrome landing page after paid links set up by a marketing firm were discovered.
If such penalties can have such an impact on brands like these, think what they could do to lesser-known brands.
Google is now cracking down on blog networks that have added sites to their networks in exchange for fees. BuildMyRank, in particular, has received a lot of attention. The company announced:
On a daily basis, we monitor our domain network to check metrics like page rank, indexed pages, etc. As with any link-building network, some de-indexing activity is expected and ours has been within a permissible range for the past two years. Unfortunately, this morning, our scripts and manual checks have determined that the overwhelming majority of our network has been de-indexed (by Google), as of March 19, 2012. In our wildest dreams, there’s no way we could have imagined this happening.
It had always been BMR’s philosophy that if we did things a bit different from other networks, we would not only have a better quality service to offer our users, but a longer life in this fickle industry. Sadly, it appears this was not the case.
In case you’re not familiar with how BMR actually works, it essentially sells link juice. In the “how it works” section, it explains that the backlinks it helps you build “help add extra link juice and added indexing speed”. This comes at prices up to $400/month.
Word throughout the SEO community is that other blog networks have been getting de-indexed as well. Meanwhile, webmasters with links from these networks have been getting messages from Google’s Webmaster Tools. SEOmoz shares a message from Google Webmaster Tools that some webmasters have received:
Dear site owner or webmaster of http://example.com/,
We’ve detected that some of your site’s pages may be using techniques that are outside Google’s Webmaster Guidelines.
Specifically, look for possibly artificial or unnatural links pointing to your site that could be intended to manipulate PageRank. Examples of unnatural linking could include buying links to pass PageRank or participating in link schemes.
We encourage you to make changes to your site so that it meets our quality guidelines. Once you’ve made these changes, please submit your site for reconsideration in Google’s search results.
If you find unnatural links to your site that you are unable to control or remove, please provide the details in your reconsideration request.
If you have any questions about how to resolve this issue, please see our Webmaster Help Forum for support.
Sincerely,
Google Search Quality Team
Is any of this really a surprise? If you’re paying a blog network, is this not basically paying for links? The most surprising thing is that sites have been getting away with it for so long, without facing the wrath of Google. That’s damn amazing, really.
“Don’t participate in link schemes designed to increase your site’s ranking or PageRank,” Google says in its Webmaster Guidelines. “In particular, avoid links to web spammers or ‘bad neighborhoods’ on the web, as your own ranking may be affected adversely by those links.”
It’s pretty clear.
Internet marketer Jennifer Ledbetter (otherwise known as PotPieGirl) wrote a fantastic article on this whole ordeal. “Let’s face it and be real,” she writes. “We’ve used any of these services, we know exactly WHY we use them, don’t we? We use them to get the in-content links to help our web pages rank better. Yes, we use them to manipulate Google rankings. We all know what we’re doing – we know Google frowns on that (ok, totally HATES that), but we do it anyway. So, please – no whining about how this isn’t ‘fair’, ok?”
SEOmoz CEO Rand Fishkin had some helpful advice on Twitter:
@randfish (Rand Fishkin): “If you’ve been affected by Google’s recent link penalties, disclosing the details of how you acquired the links can speed up reconsideration.”
There has been a lot of discussion from webmasters worried that competitors will be able to hurt their sites by pointing bad links at them, and the general consensus, as it has been for years, is that if you get good links, it should counter the bad. Barry Schwartz at Search Engine Roundtable points to a quote from Google saying, “Our algorithms are pretty complex, it takes more than a handful of bad links to sway their opinion of a website. Even if Webmaster Tools shows a million links, then that’s not going to change things if those links are all ignored for ranking purposes.”
According to Google, you really shouldn’t be focusing on the number of links you have anyway. Matt Cutts put out a video last week talking about how Google doesn’t count a lot of your links.
“I think a lot of people sometimes focus on the low-quality links that a competitor has, and they don’t realize that the vast majority of times, those links aren’t counting,” Cutts said. “So, for example, the New York Times sent us a sample of literally thousands of links that they were wondering how many of these count because they’d gotten it from some third party or other source of links, and the answer was that basically none of those links had counted. And so it’s a little easy for people to get obsessed by looking at the backlinks of their competitors and saying, ‘oh, they’re doing this bad thing or that bad thing.’ And they might not know the good links. And they might not know that a lot of those links aren’t counted at all.”
“We often use characteristics of links to help us figure out the topic of a linked page,” the company said. “We have changed the way in which we evaluate links; in particular, we are turning off a method of link analysis that we used for several years. We often rearchitect or turn off parts of our scoring in order to keep our system maintainable, clean and understandable.”
While links are the foundation of PageRank, it seems to me that links have become less and less important in search visibility altogether. Don’t get me wrong. Links matter. Good links are great. Links from sources Google thinks are great are still great, but just having a bunch of inbound links won’t get you very far if they’re not significant links.
All of that said, you may be spending too much time obsessing over search in general, and would do better to consider other means of traffic. How dependent do you really want to be on an ever-changing algorithm? Expanding upon your social strategy is likely to pay off much better, and thankfully, the better you do in social channels, the better you’re likely to do in search.
Should Google be penalizing blog/link networks? Are links as important as they once were? Tell us what you think.
Google Research has released a new study looking at how often ad impressions are accompanied by associated organic results, and how the incrementality of ad clicks varies with the rank of those results.
81% of ad impressions and 66% of ad clicks occur in the absence of an associated organic result on the first page of search results. All ad clicks in these situations are incremental.
On average, for advertisers who appear in the top rank organic slot, 50% of ad clicks are incremental. This means that half of all ad clicks are not replaced by organic clicks when search ads are paused.
For advertisers whose organic search results are in the 2nd to 4th position, 81% of ad clicks are incremental. For advertisers appearing in organic position of 5 or lower, 96% of ad clicks are incremental.
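To put those percentages in concrete (and purely hypothetical) terms: an advertiser ranking first organically that receives 1,000 ad clicks could expect roughly 500 of them to disappear entirely, rather than shift to the organic listing, if the ads were paused, while an advertiser sitting at organic position five or lower would lose about 960 of them.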
Google is careful to note in the study’s concluding remarks that results will vary for individual advertisers, and that advertisers with similar IAC (Incremental Ad Clicks) estimates may have very different organic rank for the terms in their ad campaign. Advertisers with similar organic rank may have very different IAC estimates.
If you’re blocking Google from crawling your JavaScript and CSS, you may be hurting your own search rankings. It’s not that using JavaScript and CSS will necessarily make you rank better, but if you don’t let Google crawl them, you’re not giving Google the entire picture of what’s on your page.
Matt Cutts posted a new webmaster help video, but this time, instead of responding to a user-submitted question like he usually does, he provides what he refers to as a public service announcement.
“If you block Googlebot from crawling javascript or CSS, please take a few minutes and take that out of the robots.txt and let us crawl the javascript,” says Cutts. “Let us crawl the CSS, and get a better idea of what’s going on on the page.”
“A lot of people block it because they think, ‘Oh, this is going to be resources that I don’t want to have the bandwidth or something,” but Googlebot is pretty smart about not crawling stuff too fast, and a lot of people will do things like, they’ll check for Flash, but then they’re including some javascript, and they don’t realize that including that javascript – the javascript is blocked, and so we’re not able to crawl the site as effectively as we would like,” he says.
“In addition, Google is getting better at processing javascript,” he continues. “It’s getting better at things like looking at CSS [to] figure out what’s important on the page, so if you do block Googlebot, I would ask: please take a little time, go ahead and remove those blocks from the robots.txt so you can let Googlebot in, get a better idea of what’s going on with your site, get a better idea of what’s going on with your page, and then that just helps everybody in terms of if we can find the best search results, we can return them higher to users.”
“So thanks if you can take the chance. I know it’s kind of a common idiom for people to just say, ‘Oh, I’m gonna block javascript and CSS, but you don’t need to do that now, so please, in fact, actively let Googlebot crawl things like javascript and CSS, if you can.”
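To make the fix concrete, here is a minimal robots.txt sketch of the kind of rule Cutts is talking about (the /js/ and /css/ paths are hypothetical placeholders – adjust them to wherever your site actually keeps its scripts and stylesheets). A file like this is what keeps Googlebot out:

User-agent: Googlebot
Disallow: /js/
Disallow: /css/

Deleting those Disallow lines, or explicitly allowing the directories as below, is all it takes to let Googlebot fetch those resources and see the page the way users do:

User-agent: Googlebot
Allow: /js/
Allow: /css/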
Update: A Google spokesperson gave WebProNews the following statement:
“A Japanese court issued a provisional order requesting Google to delete specific terms from Autocomplete. The judge did not require Google to completely suspend the Autocomplete function. Google is currently reviewing the order.
“Autocomplete is a feature of Google search that offers predicted searches to help you more quickly find what you’re looking for. These searches are produced by a number of factors including the popularity of search terms. Google does not determine these terms manually–all of the queries shown in Autocomplete have been typed previously by other Google users.”
Google’s autocomplete feature is getting the company in some hot water again.
Earlier this year, a court in Paris ruled against Google in a case involving autocomplete, where Google was showing a term that meant “crook” in suggested searches for an insurance firm. Google ended up being fined a reported $65,000.
Now, Google has reportedly been ordered by a Japanese court to suspend the autocomplete feature, after a search for a man’s name brought up suggestions associating him with crimes he apparently did not commit. The man’s lawyer reportedly said the feature breaches his privacy – this, according to AFP.
Sure, autocomplete can turn up unfavorable results for the subject (whether or not this puts Google at fault is debatable), but is this really a privacy issue?
As you type, Google’s algorithm predicts and displays search queries based on other users’ search activities. If you’re signed in to your Google Account and have Web History enabled, you might also see search queries from relevant searches that you’ve done in the past. In addition, Google+ profiles can sometimes appear in autocomplete when you search for a person’s name. Apart from the Google+ profiles that may appear, all of the predicted queries that are shown in the drop-down list have been typed previously by Google users.
For certain queries, Google will show separate predictions for just the last few words. Below the word that you’re typing in the search box, you’ll see a smaller drop-down list containing predictions based only on the last words of your query. While each prediction shown in the drop-down list has been typed before by Google users, the combination of your primary text along with the completion may be unique.
Predicted queries are algorithmically determined based on a number of purely algorithmic factors (including popularity of search terms) without human intervention. The autocomplete data is updated frequently to offer fresh and rising search queries.
Interestingly, the name of the man involved in the Japanese case has not been made public. Ironically, had it been, it’s likely that Google would suggest more results based on this very story for searches for his name, in effect diluting the results indicating that he had committed any crimes.
According to multiple reports, Google is not disabling the autocomplete feature in Japan on the grounds that, as the company is based in the U.S. and the feature adheres to U.S. law, it cannot, Google reportedly says, be regulated by Japanese law.
German architect Ludwig Mies van der Rohe (commonly called Mies) is connected to the phrase “less is more.” One of the truly influential figures in modern architecture, Mies called his simple but elegant designs “skin and bones” architecture. Today’s Google Doodle, a Googlized version of one of Mies’ most enduring works, celebrates his 126th birthday.
Mies was born in 1886, and began his architectural career in 1908 when he became an apprentice at the studio of Peter Behrens. There, he worked side by side with other modern architecture pioneers Walter Gropius and Le Corbusier. In the 1930s, he served as the last director of the Bauhaus, the “German School of Building” that was actually founded by Gropius.
In 1937, he came to the United States and was tapped to head the architecture department at the Illinois Institute of Technology (IIT). He worked from Chicago for his whole 31-year career in America. Here’s what the IIT has to say about his vision, on his birthday:
Mies van der Rohe believed that architecture should express the essence of its civilization – that the same things guiding our lives should build our homes, museums and offices. His buildings speak to our hope for simplicity, shaping our lived environment, and in doing so, illuminating life itself. Today we celebrate this legacy.
Today’s Google Doodle is modeled after one of his most famous buildings – Crown Hall. It is currently home to the College of Architecture at his beloved IIT. The building encapsulates Mies’ style – a 220′ by 120′ rectangle with a sparse steel frame and glass panes. The top floor is one giant space, what he called a “universal space.”
It was completed in 1956 and added to the National Register of Historic Places in 2005.
Google has pushed out another Panda Update. The company tweeted about it as the weekend got underway, saying that about 1.6% of queries are “noticeably affected”.
The tweet links to the original announcement about the update (from before the public even knew it by the name Panda). Given that Google pointed to this article, it might be worth stepping back, and revisiting Google’s own explanation of the update.
The post was from Google’s Matt Cutts and Amit Singhal. “Our goal is simple: to give people the most relevant answers to their queries as quickly as possible,” the post began. “This requires constant tuning of our algorithms, as new content—both good and bad—comes online all the time.”
“Many of the changes we make are so subtle that very few people notice them,” it continued. “But in the last day or so we launched a pretty big algorithmic improvement to our ranking—a change that noticeably impacts 11.8% of our queries—and we wanted to let people know what’s going on. This update is designed to reduce rankings for low-quality sites—sites which are low-value add for users, copy content from other websites or sites that are just not very useful. At the same time, it will provide better rankings for high-quality sites—sites with original content and information such as research, in-depth reports, thoughtful analysis and so on.”
Obviously, many algorithmic changes have been made since then, including quite a few to Panda itself. How much do you think Google’s results have improved over that time? Are they better?
“We can’t make a major improvement without affecting rankings for many sites,” the post went on. “It has to be that some sites will go up and some will go down. Google depends on the high-quality content created by wonderful websites around the world, and we do have a responsibility to encourage a healthy web ecosystem. Therefore, it is important for high-quality sites to be rewarded, and that’s exactly what this change does.”
Many, many sites did indeed go down. Some clearly deserved to do so, but for others, this was questionable. Granted, some were able to make recoveries, and Google admitted that the algorithm was not perfect.
For most of the time since Panda initially launched, Google had one main help thread, where webmasters could vent their frustration and state their claims as to why they felt their site was unjustly penalized by the algorithm. Google made it clear that they were reading the thread. Earlier this month, however, the thread got split up, though the company still encourages posting and finding old posts via search. Things just might not be as convenient as they were under one centralized thread.
Prior to the new Panda refresh, as tweeted by Google, the last Panda update, in February, improved how Panda interacts with Google’s indexing and ranking systems. Google said it was “more integrated into our pipelines”. Google also said it was made “more accurate and more sensitive to recent changes on the web.”
Google announced today that it has released a new Site Speed report, with “all the key metrics” in an easy-to-read Overview report.
“The Overview report provides an at-a-glance view of essential information for measuring your site’s page loading metrics: Avg. Page Load Time by Browser, Country/Territory, and Page,” explains Google’s Mustafa M. Tikir. “Plus you can compare your site’s average performance over time to forecast trends and view historical performance. All of these tools can help you identify where your pages may be underperforming and adjust so more visitors land on your site instead of waiting in frustration or leaving.”
“Previously there was only one Site Speed report, this has been renamed to ‘Page Timings’”, adds Tikir. “On the Page Timings report, you can view your site’s load times in three ways: use the Explorer tab to explore average load time across dimensions, use the Performance tab to see how the load times break down by speed ranges, or use the Map Overlay tab to see how the load times breakdown by geography.”
Google notes that it has also updated the Intelligence Reports to include average site load times and all Page Timings metrics.
In addition to all of this, sites with fewer than 10,000 visits per day can increase the site speed sample rate up to 100% and get full samples for page load time.
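For what it’s worth, here’s a rough sketch of how raising that sample rate looks with the classic asynchronous ga.js snippet – this assumes you’re using ga.js with the _gaq queue, and UA-XXXXX-Y is a placeholder property ID, not a real one:

var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXX-Y']);    // placeholder web property ID
_gaq.push(['_setSiteSpeedSampleRate', 100]); // sample page load times for 100% of visits
_gaq.push(['_trackPageview']);

The sample rate call simply needs to appear before the _trackPageview call in the standard tracking snippet.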
Antitrust is quickly becoming a buzzword in the technology space as companies such as Google, Apple, and others are being investigated. The Federal Trade Commission is currently looking at Google in regards to claims that it is using its search power to favor its own products in search results.
Additionally, Apple is being investigated by the Department of Justice over e-book pricing. This week, the FTC also subpoenaed Apple in hopes of gaining further information in its antitrust investigation of Google.
Although these investigations are still pending, one can’t help but wonder what might come of them.
An event hosted by the Federalist Society earlier this week looked at these issues and found that many antitrust specialists are against regulation. The Honorable James Miller III, a senior advisor with the Husch Blackwell law firm, was among the panelists and spoke to WebProNews about the discussion.
As the former Budget Director under President Reagan and the former chair of the FTC, Miller told us that antitrust concerns often arise when one company wants to get an upper hand on another company.
“A lot of the big companies are trying to use the government to suppress competition,” he said. “It’s a very cheap way of getting a jump on your competition – having the government do this job for you.”
When done with these motives, he believes antitrust enforcement is wrong. The problem he has is that it gives the government the ability to dictate how companies should operate. In addition, he said the companies lose focus on their products as a result of the investigations, which ultimately harms consumers.
“Search is not a government-run utility, established by law and thus subject to bureaucratic oversight. It is a service provided to consumers and businesses by companies, which have set up their operations using their own principles, proprietary technologies and algorithms. Each company is free to develop its own approach, fulfilling the needs of its customers as it perceives them.”
What’s more, the costs involved in this scrutiny are staggering to both the companies involved and the economy. At the slightest news of an investigation, most companies see their stock drop. Interestingly, Miller told us that, while he headed the FTC, the agency tried to keep its work confidential to avoid such a reaction.
With industries such as technology specifically, Miller said that antitrust enforcement is especially difficult since the tech space changes at a rapid pace. In other words, a company that may appear to have monopoly power could, in theory, disappear in 6 months. As a result, he thinks the government should be hesitant about regulating it.
In response to the scrutiny that Google is getting over its search power, Miller said that consumers still have options. If a user doesn’t like the experience on the search engine, according to Miller, the act of clicking to another search engine is “easy as pie.” He went on to say that it would be difficult to make these choices any better or easier for consumers.
“You have here an industry that is extraordinary, and America is leading the world,” he said. “To have our federal government come in and sort of mix things up, you need to be very careful that you don’t slow things down and hurt consumers in the process.”
Miller believes the government should approach tech companies with the future in mind. He said they should think about the long-term costs that these cases result in for the companies and taxpayers.
Interestingly, a new report from the Heritage Foundation on regulation as a whole found that new federal regulations from the Obama administration are costing $46 billion annually and that more expensive regulations are coming.
“It is correct that the regulation activity has grown leaps and bounds over the past several years and is imposing enormous costs on the American economy and reducing the rate of innovation and creativity from American companies,” said Miller.
Based on new research from the National Taxpayers Union, most consumers are against government intervention in search. As WebProNews reported last week, the study revealed that when users were asked if “the federal government should regulate the content and appearance of search engines and their results,” 64 percent strongly disagreed while only 3 percent strongly agreed.
Since it’s unclear how the DoJ and FTC will handle the current antitrust cases, it appears that this debate is a long way from being over.
Does antitrust enforcement produce harmful results? And, is the American economy suffering for this reason? Please share your thoughts.
Google often does Webmaster Central Hangouts on Google+. This gives webmasters an opportunity to connect with Googlers and learn valuable tips about how they can get more out of their sites, and out of Google.
Google’s Pierre Far announced a couple of upcoming hangouts for Tuesday, March 27, and Wednesday, March 28. Both begin at 2PM UK time, and last for an hour. Far writes:
US-based webmasters: please be careful with the time difference for these as Europe would have switched to summer time by then!
Where: Right here on Google+. It works best with a webcam + headset. You can find out more about Hangouts and how to participate at http://goo.gl/k6aMv
Topic: Anything webmaster-related: Webmaster Tools, Sitemaps, crawling, indexing, duplicate content, websites, web search, etc.
To join, you obviously need a Google+ account. The catch is that hangouts are limited to 10 participants, but people tend to come and go, so even if you can’t get in immediately, you might be able to squeeze in at some point during the hour. It’s a chance to get some direct advice about your site from Google, so depending on how pressing your issue is, it may be worth waiting to get in.
Google’s Matt Cutts has now put out a new video talking about how Google will treat the new TLDs, in response to the user-submitted question:
How will Google treat the new nTLDs where any Top Level Domain is possible e.g. for corporations eg. www.portfolio.mycompanyname regarding influence on ranking and pagerank?
“Well we’ve had hundreds of different TLDs, and we do a pretty good job of ranking those,” says Cutts. “We want to return the best result, and if the best result is on one particular TLD, then it’s reasonable to expect that we’ll do the work in terms of writing the code and finding out how to crawl different domains, where we are able to return what we think is the best result according to our system.”
“So if you are making Transformers 9, and you want to buy the domain transformers9.movie or something like that, it’s reasonable to expect that Google will try to find those results, try to be able to crawl them well, and then try to return them to users.”
“Now there’s going to be a lot of migration, and so different search engines will have different answers, and I’m sure there will be a transition period where we have to learn or find out different ways of what the valid top level domains are, and then if there’s any way where we can find out what the domains on that top level domain are,” he says. “So we’ll have to explore that space a little bit, but it’s definitely the case that we’ve always wanted to return the best result we can to users, and so we try to figure that out, whether it’s on a .com, or a .de, or a dot whatever, and we’ll try to return that to users.”
Microsoft is discussing some changes it has rolled out to adCenter over the past few weeks. They should amount to improvements to the platform, as the company says they’re based almost exclusively on customer feedback. The company runs down the list of changes in a blog post.
Changes include:
Ease of navigation in Web UI and Desktop: Menu redesign to allow for easy discovery of features
Browser Compatibility: adCenter support for Chrome and Safari
Improving Desktop Performance: Faster and more reliable experience in managing bulk tasks
Historical and Aggregated Quality Score: Improves campaign performance by having greater visibility into quality scores
In addition to these, they have several features currently in testing with select advertisers:
URL by Match Type: Beginning this week, we are introducing new functionality that will provide more precise control over advertiser campaigns. In addition to the ability to assign unique destination URLs for each keyword match type within the same ad group, advertisers may also assign unique parameters. The team is releasing this feature in waves beginning this week and continuing through the end of October. Advertisers will receive a notification email at least one week before this feature is enabled for their account.
Improved Location Targeting: Advertisers may now reach more relevant users in targeted locations with fewer steps. Also, advertisers can now leverage new advanced location options to target users by physical location or by their physical location and intent.
Broad Match Modifier: A targeting feature that lets advertisers create keywords with greater reach than phrase match and more control than broad match. Adding modified broad match keywords helps advertisers get more clicks and conversions at an attractive ROI, especially if they mainly use exact and phrase match keywords today.
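To make the distinction between these match types a little more concrete, here is a minimal Python sketch. It is not adCenter’s actual matching logic (real engines also handle plurals, misspellings and close variants); it only illustrates the basic rules generally described for exact match, phrase match and modified broad match.

```python
# Simplified illustration of exact match, phrase match, and modified broad
# match. NOT Microsoft's actual matching logic -- real engines also handle
# plurals, misspellings, and close variants -- just the basic containment
# rules the three match types imply.

def exact_match(keyword: str, query: str) -> bool:
    """The query must be identical to the keyword."""
    return query.lower().split() == keyword.lower().split()

def phrase_match(keyword: str, query: str) -> bool:
    """The keyword must appear in the query as a contiguous phrase, in order."""
    kw, q = keyword.lower().split(), query.lower().split()
    return any(q[i:i + len(kw)] == kw for i in range(len(q) - len(kw) + 1))

def modified_broad_match(keyword: str, query: str) -> bool:
    """Every term marked with '+' must appear somewhere in the query, in any order."""
    required = {t.lstrip("+").lower() for t in keyword.split() if t.startswith("+")}
    return required.issubset(query.lower().split())

query = "cheap caribbean vacations for families"
print(exact_match("caribbean vacations", query))             # False
print(phrase_match("caribbean vacations", query))            # True
print(modified_broad_match("+caribbean +vacations", query))  # True
```

In this sketch, the modified broad match keyword “+caribbean +vacations” reaches the longer query without requiring the exact phrase order, which is the reach-with-control trade-off described above.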
According to the company, advertisers should expect these features to be generally available soon. On top of all of this, the company is working to launch more interactive ad formats for mobile.
adCenter, of course, provides the paid search ads behind not only Bing, but also Yahoo. The adCenter portion of the companies’ search alliance is still rolling out to more countries. Last month, Microsoft announced the migration of Yahoo Search Marketing accounts in the UK, Ireland and France. It has already been completed in the U.S. and Canada (though the organic search part of the alliance is already worldwide).
Google announced changes to AdWords, which the company says will simplify the way advertisers buy and run display ads through AdWords.
“For nine years, AdWords customers have been buying display campaigns through an interface designed for search,” says Google’s AdWords team in a blog post. “This is like trying to run in glass slippers — it might work, but it’d be a lot more effective with the right running shoes. So we’re giving display its own tab within AdWords.”
The tab will roll out over the next few weeks.
In addition to the tab, Google announced an update to the contextual engine (the system that matches ads to pages based on keywords) for AdWords, which the company calls its “biggest enhancement ever.” The company also released a video of Brad Bender, Director of Product Management for Display, talking about both the tab and the updated engine.
Google says the update gives the engine the ability to combine the reach of display with the “precision” of search using “next-gen” keyword targeting.
Some might say, however, that precision and search aren’t quite as synonymous as they once were in the online advertising world. There’s no question that Facebook can get much more precise when it comes to targeting based on demographics and interests. Google will no doubt try to improve on this with Google+ and its revamped privacy policy, but it simply doesn’t have the data about web users that Facebook does.
That said, the advantage of search, in comparison, is timing: search ads are delivered when the user is actually looking for something in particular. Imagine if Google were able to get Facebook-type data and have the best of both worlds.
On the updated engine, Google says, “For example, let’s say you’re running display campaigns for a Travel Agency who offers a vacation packages in several Caribbean islands. In the past, you would have created themed ad groups targeting vacations to Turks and Caicos and the Caribbean. Now, with this new keyword level transparency you might realize that the keyword ‘Turks and Caicos vacations’ is 4 times more profitable than the keyword ‘caribbean vacations’. You can optimize your campaigns to aggressively target these high performing keywords, and be more conservative on ‘caribbean vacations’.”
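As a rough sketch of the arithmetic behind that example, here is a short Python snippet with invented cost and revenue figures. It is not tied to the AdWords interface or any API; the numbers and the “bid up” threshold are purely hypothetical.

```python
# Hypothetical keyword-level performance data -- the figures are invented
# purely to mirror the "4 times more profitable" comparison in Google's example.
keyword_stats = {
    "turks and caicos vacations": {"cost": 500.00, "revenue": 2500.00},
    "caribbean vacations":        {"cost": 500.00, "revenue": 1000.00},
}

def profit_per_dollar(cost: float, revenue: float) -> float:
    """Profit generated for every dollar of ad spend."""
    return (revenue - cost) / cost

for keyword, stats in keyword_stats.items():
    ratio = profit_per_dollar(stats["cost"], stats["revenue"])
    # The 3.0 cutoff is arbitrary; it just separates the two example keywords.
    action = "target aggressively" if ratio >= 3.0 else "bid conservatively"
    print(f"{keyword}: ${ratio:.2f} profit per $1 spent -> {action}")

# Output:
#   turks and caicos vacations: $4.00 profit per $1 spent -> target aggressively
#   caribbean vacations: $1.00 profit per $1 spent -> bid conservatively
```

With keyword-level reporting exposing numbers like these, the “Turks and Caicos vacations” keyword shows four times the return of “caribbean vacations,” which is exactly the kind of gap that would justify shifting budget toward the stronger keyword.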
Along with all of this, Google is launching a “targeting diagram” feature, to help advertisers better visualize the reach of campaigns, and see how they’re impacted by combining various targeting types.
Google announced a new “status insights” icon in the ads tab in AdWords accounts, which Google says provides visibility into the approval status and potential policy limitations of each ad creative. Google says it’s a way to diagnose your ads faster.
“The new icon will be particularly valuable if you’re advertising products or services that are restricted by our advertising policies to show only in specific countries or with certain keywords,” says Katie Miller of Google’s Inside AdWords crew.
“We’ll tell you if the individual ad is showing for the default keyword and location displayed in the hover,” she adds.
To use the icon, hover over the speech bubble in the Status column. You can re-diagnose an ad with a different target location or keyword by editing the parameters.
Last week, Google announced that it increased campaign limits to 10,000 (including active and paused campaigns) per account.
Google has no doubt been one of the most influential forces on the internet, and the search giant has changed the way we view the world and interact with it. When Google was first introduced in 1998, things looked a lot different. Its endless pursuit of refining search has led to endless changes to its algorithm.
From 1998 to the present there have been over 500 changes, and this infographic attempts to reveal some of the major changes and milestone alterations. It’s an awful lot of information, but it may make sense to some of you.