WebProNews

Tag: SEO

  • Google: Panda 4.0 Brings in ‘Softer Side,’ Lays Groundwork For Future

    Back in March, Google’s Matt Cutts spoke at the Search Marketing Expo, and said that Google was working on the next generation of the Panda update, which he said would be softer and more friendly to small sites and businesses.

    Last week, Google pushed Panda 4.0, which Cutts reiterated is a bit softer than previous versions, and also said will “lay the groundwork” for future iterations.

    Barry Schwartz at SMX sister site Search Engine Land, who was in attendance at the session in which Cutts spoke about the update, gave a recap of his words at the time:

    Cutts explained that this new Panda update should have a direct impact on helping small businesses do better.

    One Googler on his team is specifically working on ways to help small web sites and businesses do better in the Google search results. This next generation update to Panda is one specific algorithmic change that should have a positive impact on the smaller businesses.

    It’s interesting that Google even announced the update at all, as it had pretty much stopped letting people know when new Panda refreshes were launched. The world is apparently not bored enough with Panda updates for Google to stop announcing them entirely.

    Here’s a look at Searchmetrics’ attempt to identify the top winners and losers of Panda 4.0.

    Image via YouTube

  • Google Names JavaScript Issues That Can Negatively Impact Your Search Results, Readies New Webmaster Tool

    Ever wonder how Google handles the JavaScript on your site? It’s a common question, as Google’s Matt Cutts has discussed it several times in Webmaster Help videos.

    Google took to its Webmaster Central blog on Friday to talk about it even more, and offer a bit of perspective about just how far it’s come when it comes to handling JavaScript since the early days when it basically didn’t handle it at all.

    Beyond patting itself on the back though, Google offers some useful information – specifically things that may lead to a negative impact on search results for your site.

    “If resources like JavaScript or CSS in separate files are blocked (say, with robots.txt) so that Googlebot can’t retrieve them, our indexing systems won’t be able to see your site like an average user,” the post, co-written by a trio of Googlers, says. “We recommend allowing Googlebot to retrieve JavaScript and CSS so that your content can be indexed better. This is especially important for mobile websites, where external resources like CSS and JavaScript help our algorithms understand that the pages are optimized for mobile.”

    “If your web server is unable to handle the volume of crawl requests for resources, it may have a negative impact on our capability to render your pages. If you’d like to ensure that your pages can be rendered by Google, make sure your servers are able to handle crawl requests for resources,” the post continues. “It’s always a good idea to have your site degrade gracefully. This will help users enjoy your content even if their browser doesn’t have compatible JavaScript implementations. It will also help visitors with JavaScript disabled or off, as well as search engines that can’t execute JavaScript yet.”
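
    To check how your own robots.txt treats those script and stylesheet files, Python’s standard library includes a robots.txt parser that can answer the question directly. The sketch below is a minimal illustration, not anything Google prescribes: example.com, the asset paths, and the user-agent string are all placeholders you’d swap for your own.

    ```python
    # Quick sanity check: does robots.txt allow a Googlebot-style crawler to
    # fetch the JavaScript and CSS files your pages depend on?
    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser("https://www.example.com/robots.txt")
    parser.read()  # fetch and parse the live robots.txt

    assets = [
        "https://www.example.com/js/app.js",     # placeholder script URL
        "https://www.example.com/css/site.css",  # placeholder stylesheet URL
    ]

    for url in assets:
        allowed = parser.can_fetch("Googlebot", url)
        print(f"{url}: {'allowed' if allowed else 'blocked for Googlebot'}")
    ```

    If any of those assets comes back blocked, that’s exactly the situation the post describes: Google’s indexing systems can’t see the page the way a user would.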

    Google also notes that some JavaScript is too complex or arcane for it to execute, which means it won’t be able to render the page fully or accurately. That’s something to keep in mind for sure.

    Also, some JavaScript removes content from the page, which prevents Google from indexing it.

    Google says it’s working on a tool for helping webmasters better understand how Google renders their site, which will be available in Webmaster Tools within days.

    Image via Google

  • Searchmetrics Lists Winners And Losers Of Google’s New Panda Update

    As reported earlier this week, Google has launched a new version of the Panda update, which the company has officially dubbed version 4.0. It just happened to come around the same time as another big algorithm update aimed at cleaning up spam (a new version of the “Payday Loans” update).

    When Google launches new Panda updates, Searchmetrics typically attempts to identify the winners and losers. It hasn’t always been 100% accurate in the past, and when another update is launched around the same time, things can be a little more complicated, but here’s what they came up with this time:

    Losers:

    domain percent
    ask.com > – 50%
    ebay.com > – 33%
    biography.com > – 33%
    retailmenot.com > – 33%
    starpulse.com > – 50%
    history.com > – 33%
    isitdownrightnow.com > – 50%
    aceshowbiz.com > – 75%
    examiner.com > – 50%
    yellowpages.com > – 20%
    yourtango.com > – 75%
    dealcatcher.com > – 50%
    livescience.com > – 50%
    webopedia.com > – 50%
    xmarks.com > – 50%
    simplyrecipes.com > – 33%
    siteslike.com > – 50%
    digitaltrends.com > – 50%
    health.com > – 50%
    spoonful.com > – 75%
    songkick.com > – 75%
    realsimple.com > – 33%
    appbrain.com > – 33%
    thehollywoodgossip.com > – 50%
    dealspl.us > – 33%
    techtarget.com > – 33%
    gossipcop.com > – 50%
    rd.com > – 75%
    chow.com > – 33%
    doxo.com > – 50%
    heavy.com > – 50%
    csmonitor.com > – 33%
    toptenreviews.com > – 20%
    parenting.com > – 50%
    globalpost.com > – 75%
    espnfc.com > – 50%
    serviceguidance.com > – 50%
    mnn.com > – 75%
    mystore411.com > – 50%
    urlm.co > – 33%
    delish.com > – 50%
    healthcentral.com > – 33%
    whatscookingamerica.net > – 50%
    columbia.edu > – 20%
    songlyrics.com > – 20%
    internetslang.com > – 33%
    ibiblio.org > – 50%
    webutation.info > – 50%
    cheapflights.com > – 33%
    mybanktracker.com > – 50%

    Winners:

    domain percent
    glassdoor.com > 100%
    emedicinehealth.com > 500%
    medterms.com > 500%
    yourdictionary.com > 50%
    shopstyle.com > 250%
    zimbio.com > 500%
    myrecipes.com > 250%
    couponcabin.com > 250%
    buzzfeed.com > 25%
    consumeraffairs.com > 100%
    wordpress.com > 20%
    thinkexist.com > 250%
    onhealth.com > 250%
    alternativeto.net > 100%
    whosdatedwho.com > 250%
    reverso.net > 50%
    wikimedia.org > 100%
    dogtime.com > 100%
    findthebest.com > 50%
    eatingwell.com > 100%
    quotegarden.com > 100%
    goodhousekeeping.com > 250%
    everydayhealth.com > 25%
    simplyhired.com > 100%
    momswhothink.com > 100%
    similarsites.com > 100%
    southernliving.com > 50%
    theknot.com > 25%
    allaboutvision.com > 100%
    openculture.com > 50%
    babyzone.com > 50%
    tasteofhome.com > 33%
    gotquestions.org > 100%
    movie4k.to > 50%
    wmagazine.com > 33%
    ycharts.com > 100%
    historyplace.com > 50%
    rcn.com > 100%
    salary.com > 50%
    skepdic.com > 100%
    mediawiki.org > 100%
    oodle.com > 100%
    abbreviations.com > 100%
    homes.com > 100%
    spokeo.com > 50%
    hlntv.com > 33%
    sparkpeople.com > 33%
    hayneedle.com > 50%
    emedtv.com > 100%

    BuzzFeed wins again!

    The losers list is interesting this time around – particularly the eBay and RetailMeNot entries (this isn’t the first time we’ve seen Ask on the list).

    Searchmetrics founder Marcus Tober concludes:

    Some sites that should be potentially on the Panda loser list have actually shown a positive development. This could be the “learning from mistakes” (also from others), as some of these candidates have (now) written their own content. The losers on the other hand, tend to show syndicated content or even duplicate content. But this doesn’t mean that this is the end of the update. Google proofed in the past that they are able to perform improvements or rollbacks really fast. So we will see what will happen over the next weeks.

    By the way, the Payday-Loan losers can be identified pretty easy -> URL completely removed from the index (it’s more like a link -based loss) or not (rather Panda).

    We already knew that eBay took a hit thanks to some digging by Dr. Peter J. Meyers at Moz, who points to specific keywords where eBay pages fell out of the top ten.

    RetailMeNot actually released a statement following the release of the Searchmetrics charts, in which it said:

    The company believes these reports greatly overstate the impact on RetailMeNot.com. Over RetailMeNot’s history, search engines have periodically implemented algorithm changes that have caused traffic to fluctuate. It is too early to judge any potential impact of the latest Google algorithm change. While RetailMeNot’s traffic with Google continues to grow year-over-year, the company has experienced some shift in rankings and traffic. The company continues to believe its focus on content quality and user experience will continue to help grow the business, enable consumers to save money and drive retailer sales.

    With RetailMeNot’s 600,000 digital offers from over 70,000 retailers and brands, the company believes it offers consumers the largest selection of digital offers. Since a substantial portion of those offers are not monetized today, traffic fluctuations do not necessarily correlate to financial performance. At this time, RetailMeNot does not have an update to its financial guidance.

    RetailMeNot has made considerable strides to diversify its traffic sources. In the first quarter 2014, approximately 35% of RetailMeNot’s traffic came from sources other than search engines. RetailMeNot has its highest brand awareness metrics in the company’s history, and millions of consumers are coming to RetailMeNot directly through its mobile applications.

    RetailMeNot’s strategy remains unchanged. The company will continue to strive to provide consumers with the best user experience and highest quality offers from leading retailers and brands.

    You have to wonder if any of the Panda 4.0 losers will face decisions like Meta Filter and others have had to in the past, and lay off employees.

    Keep in mind that Searchmetrics’ lists should be taken with a grain of salt.

    Image via Wikimedia Commons

  • Google Pushes MetaFilter To Layoffs

    Sites fall victim to Google’s algorithms all the time, but this week, one in particular is getting a great deal of attention. MetaFilter, which was a popular web destination years ago (it claims to have still had over 80 million readers last year), was hit by a Google update (possibly Panda, but it’s unclear) a year and a half ago, and has been unable to recover.

    The site’s founder Matt Haughey blogged this week about how the decline in Google traffic has led him to lay off a few of the site’s staff. On Monday, he wrote the “State of MetaFilter” post. It begins:

    Today I need to share some unfortunate news: because of serious financial downturn, MetaFilter will be losing three of its moderators to layoffs at the end of this month. What that means for the site and the site’s future are described below.

    While MetaFilter approaches 15 years of being alive and kicking, the overall website saw steady growth for the first 13 of those years. A year and a half ago, we woke up one day to see a 40% decrease in revenue and traffic to Ask MetaFilter, likely the result of ongoing Google index updates. We scoured the web and took advice of reducing ads in the hopes traffic would improve but it never really did, staying steady for several months and then periodically decreasing by smaller amounts over time.

    The long-story-short is that the site’s revenue peaked in 2012, back when we hired additional moderators and brought our total staff up to eight people. Revenue has dropped considerably over the past 18 months, down to levels we last saw in 2007, back when there were only three staffers.

    In a Medium post, Haughey says the site has been getting emails from others asking them to remove links because Google had told them the links were “inorganic”.

    Haughey claims, however, that they have a staff of six full-time moderators in five timezones making sure zero spam ends up on the site.

    Other sites have been publishing sympathetic posts, wondering if Google has simply made a big mistake when it comes to MetaFilter.

    Former Googler (and MetaFilter member) David Auerbach writes for Slate, “If, like many Slate readers, you’re considering a septum piercing, MetaFilter’s page on pros and cons is far more informative (and better-spelled) than Yahoo Answers’ or Body Jewellery [sic] Shop’s (both of which Google ranks above MetaFilter if you search on ‘septum piercing pros and cons’). In short, MetaFilter is the sort of site that makes the Web better.”

    Danny Sullivan has a long piece about it at Search Engine Land, which despite its length and the expertise he brings to the table, fails to come up with a real conclusion as to why the site was hit. He does, however, make some great points about how Google should be a little more helpful to sites that are hit, in giving them information about why.

    MetaFilter and Google’s Matt Cutts have been discussing things though, so perhaps a resolution is on the horizon. Or at the very least, maybe the site is getting a better idea about what went wrong.

    If it weren’t for the whole inorganic links thing, I’d wonder if it had anything to do with the overall appearance of the site. The site has been around for a long time, and frankly it looks like a site that came out a long time ago. That’s not a knock. Just a fact (okay, an opinion, I guess).

    While many of us are perfectly fine with an older-style site design showing up in search results as long as the content is good, I can’t help but be reminded of comments Cutts made in one of his Webmaster Help videos a while back, talking about established sites not being able to rank forever without evolving.

    “The advice that I’d give to you as the owner of a site that’s been around for fourteen years is to take a fresh look at your site,” he said. “A lot of times if you land on your site, and you land on a random website from a search result, you know, even if they’ve been in business for fifteen years, fourteen years, sometimes they haven’t updated their template or their page layout or anything in years and years and years, and it looks, frankly, like sort of a stale sort of an older site, and that’s the sort of thing where users might not be as happy about that.”

    “And so if you do run an older site or a very well-established site, I wouldn’t just coast on your laurels,” he adds. “I wouldn’t just say, ‘Well I’m number one for now, and everything is great,’ because newer sites, more agile sites, more hungry sites, more sites that have a better user experience – they can grow, and they can eclipse you if you don’t continue to adapt, and evolve, and move with the times. So I wouldn’t say just because you are a domain that’s well-established or has been around for a long time, you will automatically keep ranking. We’ve seen plenty of newer domains and businesses bypass older domains.”

    This may have absolutely nothing at all to do with MetaFilter’s situation, but Cutts’ words are at least worth noting.

    It may very well be that the site gets its rankings back after all of this, though that would be somewhat surprising given that the hit came so long ago. On the other hand, sometimes Google responds to conversations that become very public, and draw significant media attention. Or maybe Google just has a completely legitimate reason for not ranking the content in question better.

    I’m guessing we haven’t heard the end of the story just yet.

    Image via MetaFilter

  • Google Launches Two Algorithm Updates Including New Panda

    Google makes changes to its algorithm every day (sometimes multiple changes in one day).

    When the company actually announces them, you know they’re bigger than the average update, and when one of them is named Panda, it’s going to get a lot of attention.

    Have you been affected either positively or negatively by new Google updates? Let us know in the comments.

    Google’s head of webspam Matt Cutts tweeted about the updates on Tuesday night:

    Panda has been refreshed on a regular basis for quite some time now, and Google has indicated in the past that it no longer requires announcements because of that. At one point, it was actually softened. But now, we have a clear announcement about it, and a new version number (4.0), so it must be significant. For one, this indicates that the algorithm was actually updated as opposed to just refreshed, opening up the possibility for some big shuffling of rankings.

    The company told Search Engine Land that the new Panda affects different languages to different degrees, and impacts roughly 7.5% of queries in English to the degree regular users might notice.

    The other update is a new version of what is sometimes referred to as the “payday loans” update. The first one was launched just a little more than a year ago. Cutts discussed it in this video before launching it:

    “We get a lot of great feedback from outside of Google, so, for example, there were some people complaining about searches like ‘payday loans’ on Google.co.uk,” he said. “So we have two different changes that try to tackle those kinds of queries in a couple different ways. We can’t get into too much detail about exactly how they work, but I’m kind of excited that we’re going from having just general queries be a little more clean to going to some of these areas that have traditionally been a little more spammy, including for example, some more pornographic queries, and some of these changes might have a little bit more of an impact on those kinds of areas that are a little more contested by various spammers and that sort of thing.”

    He also discussed it at SMX Advanced last year. As Barry Schwartz reported at the time:

    Matt Cutts explained this goes after unique link schemes, many of which are illegal. He also added this is a world-wide update and is not just being rolled out in the U.S. but being rolled out globally.

    This update impacted roughly 0.3% of the U.S. queries but Matt said it went as high as 4% for Turkish queries where web spam is typically higher.

    That was then. This time, according to Schwartz, who has spoken with Cutts, it impacts English queries by about 0.2% to a noticeable degree.

    Sites are definitely feeling the impact of Google’s new updates.

    Here are a few comments from the WebmasterWorld forum from various webmasters:

    We’ve seen a nice jump in Google referrals and traffic over the past couple of days, with the biggest increase on Monday (the announced date of the Panda 4.0 rollout). Our Google referrals on Monday were up by 130 percent….

    I am pulling out my hair. I’ve worked hard the past few months to overcome the Panda from March and was hoping to come out of it with the changes I made. Absolutely no change at all in the SERPS. I guess I’ll have to start looking for work once again.

    While I don’t know how updates are rolled out, my site that has had panda problems since April 2011 first showed evidence of a traffic increase at 5 p.m. (central, US) on Monday (5/19/2014).

    This is the first time I have seen a couple sites I deal with actually get a nice jump in rankings after a Panda…

    It appears that eBay has taken a hit. Dr. Peter J. Meyers at Moz found that eBay lost rankings on a variety of keywords, and that the main eBay subdomain fell out of Moz’s “Big 10,” which is its metric of the ten domains with the most real estate in the top 10.

    “Over the course of about three days, eBay fell from #6 in our Big 10 to #25,” he writes. “Change is the norm for Google’s SERPs, but this particular change is clearly out of place, historically speaking. eBay has been #6 in our Big 10 since March 1st, and prior to that primarily competed with Twitter.com for either the #6 or #7 place. The drop to #25 is very large. Overall, eBay has gone from right at 1% of the URLs in our data set down to 0.28%, dropping more than two-thirds of the ranking real-estate they previously held.”

    He goes on to highlight specific key phrases where eBay lost rankings. It lost two top ten rankings for three separate phrases: “fiber optic christmas tree,” “tongue rings,” and “vermont castings”. Each of these, according to Meyers, was a category page on eBay.

    eBay also fell out of the top ten, according to this report, for queries like “beats by dr dre,” “honeywell thermostat,” “hooked on phonics,” “batman costume,” “lenovo tablet,” “george foreman grill,” and many others.

    It’s worth noting that eBay tended to be on the lower end of the top ten rankings for these queries. They’re not dropping out of the number one spot, apparently.

    Either way, this isn’t exactly good news for eBay sellers. Of course, it’s unlikely that Google was specifically targeting eBay with either update, and they could certainly bounce back.

    Have you noticed any specific types of sites (or specific sites) that have taken a noticeable hit? Do Google’s results look better in general? Let us know in the comments.

    Image via Thinkstock

  • Google Responds To Link Removal Overreaction

    People continue to needlessly ask sites that have legitimately linked to theirs to remove links because they’re afraid Google won’t like these links or because they simply want to be cautious about what Google may find questionable at any given time. With Google’s algorithms and manual penalty focuses changing on an ongoing basis, it’s hard to say what will get you in trouble with the search engine down the road. Guest blogging, for example, didn’t used to be much of a concern, but in recent months, Google has people freaking out about that.

    Have you ever felt compelled to have a natural link removed? Let us know in the comments.

    People take different views on specific types of links whether they’re from guest blog posts, directories, or something else entirely, but things have become so bass ackwards that people seek to have completely legitimate links to their sites removed. Natural links.

    The topic is getting some attention once again thanks to a blog post from Jeremy Palmer called “Google is Breaking the Internet.” He talks about getting an email from a site his site linked to.

    “In short, the email was a request to remove links from our site to their site,” he says. “We linked to this company on our own accord, with no prior solicitation, because we felt it would be useful to our site visitors, which is generally why people link to things on the Internet.”

    “For the last 10 years, Google has been instilling and spreading irrational fear into webmasters,” he writes. “They’ve convinced site owners that any link, outside of a purely editorial link from an ‘authority site’, could be flagged as a bad link, and subject the site to ranking and/or index penalties. This fear, uncertainty and doubt (FUD) campaign has webmasters everywhere doing unnatural things, which is what Google claims they’re trying to stop.”

    It’s true. We’ve seen similar emails, and perhaps you have too. A lot of sites have. Barry Schwartz at Search Engine Roundtable says he gets quite a few of them, and has just stopped responding.

    It’s gotten so bad that people even ask StumbleUpon to remove links. You know, StumbleUpon – one of the biggest drivers of traffic on the web.

    “We typically receive a few of these requests a week,” a spokesperson for the company told WebProNews last year. “We evaluate the links based on quality and if they don’t meet our user experience criteria we take them down. Since we drive a lot of traffic to sites all over the Web, we encourage all publishers to keep and add quality links to StumbleUpon. Our community votes on the content they like and don’t like so the best content is stumbled and shared more often while the less popular content is naturally seen less frequently.”

    Palmer’s post made its way to Hacker News, and got the attention of a couple Googlers including Matt Cutts himself. It actually turned into quite a lengthy conversation. Cutts wrote:

    Note that there are two different things to keep in mind when someone writes in and says “Hey, can you remove this link from your site?”

    Situation #1 is by far the most common. If a site gets dinged for linkspam and works to clean up their links, a lot of them send out a bunch of link removal requests on their own prerogative.

    Situation #2 is when Google actually sends a notice to a site for spamming links and gives a concrete link that we believe is part of the problem. For example, we might say “we believe site-a.com has a problem with spam or inorganic links. An example link is site-b.com/spammy-link.html.”

    The vast majority of the link removal requests that a typical site gets are for the first type, where a site got tagged for spamming links and now it’s trying hard to clean up any links that could be considered spammy.

    He also shared this video discussion he recently had with Leo Laporte and Gina Trapani.

    Cutts later said in the Hacker News thread, “It’s not a huge surprise that some sites which went way too far spamming for links will sometimes go overboard when it’s necessary to clean the spammy links up. The main thing I’d recommend for a site owner who gets a fairly large number of link removal requests is to ask ‘Do these requests indicate a larger issue with my site?’ For example, if you run a forum and it’s trivially easy for blackhat SEOs to register for your forum and drop a link on the user profile page, then that’s a loophole that you probably want to close.
    But if the links actually look organic to you or you’re confident that your site is high-quality or doesn’t have those sorts of loopholes, you can safely ignore these requests unless you’re feeling helpful.”

    Side note: Cutts mentioned in the thread that Google hasn’t been using the disavow links tool as a reason not to trust a source site.

    Googler Ryan Moulton weighed in on the link removal discussion in the thread, saying, “The most likely situation is that the company who sent the letter hired a shady SEO. That SEO did spammy things that got them penalized. They brought in a new SEO to clean up the mess, and that SEO is trying to undo all the damage the previous one caused. They are trying to remove every link they can find since they didn’t do the spamming in the first place and don’t know which are causing the problem.”

    That’s a fair point that has gone largely overlooked.

    Either way, it is indeed clear that sites are overreacting in getting links removed from sites. Natural links. Likewise, some sites are afraid to link out naturally for similar reasons.

    After the big guest blogging bust of 2014, Econsultancy, a reasonably reputable digital marketing and ecommerce resource site, announced that it was adding nofollow to links in the bios of guest authors as part of a “safety first approach”. Keep in mind, they only accept high quality posts in the first place, and have strict guidelines.

    Econsultancy’s Chris Lake wrote at the time, “Google is worried about links in signatures. I guess that can be gamed, on less scrupulous blogs. It’s just that our editorial bar is very high, and all outbound links have to be there on merit, and justified. From a user experience perspective, links in signatures are entirely justifiable. I frequently check out writers in more detail, and wind up following people on the various social networks. But should these links pass on any linkjuice? It seems not, if you want to play it safe (and we do).”

    Of course Google is always talking about how important the user experience is.

    Are people overreacting with link removals? Should the sites doing the linking respond to irrational removal requests? Share your thoughts in the comments.

    Image via Twit.tv

  • Here’s Why Pinterest’s New Funding Is Good News For Your Business

    Pinterest has reportedly raised a new $200 million round of funding, valuing the company at $5 billion. Investors are apparently impressed with the direction the already popular visual social media site is taking, which includes new native ads and enhancements to the search experience.

    Is Pinterest search part of your business strategy? Will it be in the future? Let us know in the comments.

    Pinterest raised two separate rounds last year, totaling $425 million, and has now raised a grand total of $764 million.

    ReadWrite shares this statement from Pinterest CEO Ben Silbermann:

    Pinterest has a vision of solving discovery and helping everyone find things they’ll love. This new investment gives us additional resources to realize our vision.

    “Solving discovery and helping everyone find things” makes it sound like search is going to continue to be the main focus.

    Improving Search

    Last month, the company launched Guided Search, which lets people find ideas for things like where to plan vacations, what to have for dinner, etc.

    “It’s made for exploring, whether you know exactly what you want, or you’re just starting to look around,” explained Hui Xu, head of the discovery team at Pinterest. “There are more than 750 million boards with 30 billion Pins hand-picked by travelers, foodies, and other Pinners, so the right idea is just a few taps away.”

    “Now when you search for something (road trips, running, summer BBQ), descriptive guides will help you sift through all the good ideas from other Pinners,” Xu added. “Scroll through the guides and tap any that look interesting to steer your search in the right direction. Say you’re looking for plants to green up your apartment, guides help you get more specific—indoors, shade, succulents—so you can hone in on the ones that suit your space. Or when it’s time for your next haircut, search by specific styles—for redheads, curly hair, layers—to find your next look.”

    Here’s a use case for plants:

    This was the second big search move by Pinterest this year. In January, it launched an improved recipe search experience, enabling users to search for ingredients (like whatever is in their refrigerators), to find collections of relevant recipes. It has filters like vegetarian, vegan, gluten-free, paleo, etc.

    One can see where this type of thing could be expanded to more verticals. The feature is part of Pinterest’s “more useful Pins” initiative, which uses structured data (like ingredients, cook time, and servings) to display more info right on the pin.
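
    Pinterest’s developer documentation spells out exactly which markup formats it reads for this, so treat the following purely as an illustration of the general idea rather than Pinterest’s required format: a hypothetical recipe’s facts expressed with schema.org’s Recipe vocabulary and printed as a JSON-LD block.

    ```python
    # Illustrative sketch: recipe facts (ingredients, cook time, servings)
    # expressed as schema.org Recipe structured data in JSON-LD.
    import json

    recipe = {
        "@context": "https://schema.org",
        "@type": "Recipe",
        "name": "Weeknight Tomato Soup",  # placeholder recipe
        "recipeIngredient": ["2 lbs tomatoes", "1 onion", "4 cups stock"],
        "cookTime": "PT30M",              # ISO 8601 duration: 30 minutes
        "recipeYield": "4 servings",
    }

    # Embed the markup in the page so crawlers can pick up the structured fields.
    print('<script type="application/ld+json">')
    print(json.dumps(recipe, indent=2))
    print("</script>")
    ```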

    Promoted Pins

    Search is one of the most obvious ways of monetizing the site, and they’re starting to do that as well. Earlier this week, Pinterest announced that it is rolling out the next phase of its Promoted Pins ad product, which it began testing last fall.

    The company currently counts ABC Family, Banana Republic, Expedia, GAP, General Mills, Kraft, Lululemon Athletica, Nestle (Purina, Dreyer’s/Edy’s Ice Cream, Nespresso), Old Navy, Target, Walt Disney Parks and Resorts, and Ziploc, among its advertisers.

    “During the test brands will work with Pinterest to help ensure the pins are tasteful, transparent, relevant and improved based on feedback from the Pinterest community,” a Pinterest spokesperson told WebProNews in an email.

    “Tens of millions of people have added more than 30 billion Pins to Pinterest and brands are a big part of this,” said head of partnerships Joanne Bradford. “Brands help people find inspiration and discover things they care about, whether it’s ideas for dinner, places to go or gifts to buy. We hope Promoted Pins give businesses of all sizes a chance to connect with more Pinners.”

    The company will use this early group of advertisements to collect feedback, and will then open them up to more businesses later this year.

    AdAge reported a couple months back that Pinterest was looking for spending commitments of between one and two million dollars.

    Later, Digiday shared a pitch deck from the company indicating that CPMs would be about $30, and that the company is seeking six-month commitments at roughly $150K per month ($900,000 total). Ads targeted upon search keywords will be priced on a CPC basis, it indicated, while those placed in “Everything & Popular Feeds” will be on a CPM basis. Promoted Pins can be placed in 32 different categories, according to that, and advertisers will be able to target US-only, the user’s location and the “metro-city level”. The ads will also be targeted based on device. Age will not be a targeting option initially, but apparently will become one later.

    Traffic To Your Site

    Promoted Pins might be out of your business’ reach for now, but there’s plenty of opportunity for some good old organic traffic. We recently looked at a report from Shareaholic on social media traffic referrals, which showed that Facebook referrals are growing significantly, with the social network leading all social sites. Guess what number 2 is.

    Pinterest may be significantly behind Facebook in this department, but look how much further ahead it is than all the rest, and look at the growth curve compared to the rest. Now consider that they’re only starting to make drastic search improvements. The site stands to only increase traffic referral potential.

    If you haven’t been using Pinterest for business, you may be unaware that it also recently added a new way for businesses to track their pinned links with support for Google Analytics UTM variables.

    “If you’re already using Google Analytics, it’s easy to see how your Pins are performing by tagging your Pin links with the correct UTM parameters,” explained Pinterest’s Jason Costa. “If you’ve already got UTM tracking on your Pin links, you’ll start to see more activity on your campaign and source tracking on Google Analytics.”
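
    As a rough sketch of what that looks like in practice, here’s one way to append UTM parameters to a Pin’s destination URL before pinning it. The source, medium, and campaign values are placeholders; match them to however your Google Analytics campaigns are actually named.

    ```python
    # Sketch: append UTM parameters to a Pin's destination link so visits show
    # up under source and campaign tracking in Google Analytics.
    from urllib.parse import urlencode, urlparse, urlunparse

    def tag_pin_link(url, campaign):
        params = urlencode({
            "utm_source": "pinterest",  # placeholder values -- use your own naming scheme
            "utm_medium": "social",
            "utm_campaign": campaign,
        })
        parts = urlparse(url)
        query = f"{parts.query}&{params}" if parts.query else params
        return urlunparse(parts._replace(query=query))

    print(tag_pin_link("https://www.example.com/products/blue-widget", "spring_pins"))
    # https://www.example.com/products/blue-widget?utm_source=pinterest&utm_medium=social&utm_campaign=spring_pins
    ```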

    Pinterest has suggested using humor, using quotes, going “behind-the-scenes,” including fans, highlighting products and spaces, offering exclusive content and “sneak peeks,” and helping users live “inspired lives” as ways to generate more engagement and referrals.

    Keep in mind that Pinterest so far hasn’t been the greatest social channel for engagement after the click. Another recent Shareaholic report found it to be near the bottom of the list in the average time on site (your site) metric, and not all that great for the pages/visit metric either. But if you’re looking to get people to a specific page, you could do a lot worse.

    Pinterest Users Are Shoppers

    Rest assured, Pinterest users want to buy things.

    A new report out from Ahalogy finds that 52% of daily users are opening the app in stores. Mobile Marketer shares some commentary:

    “Pinterest is becoming a universal in-store shopping list,” said Bob Gilbreath, co-founder and president of Ahalogy, Cincinnati, OH. “Many Pinterest users claim to pin items at home and then pull up the app in store, for example, to remember that dress from Nordstrom that she pinned, or find the ingredients for a recipe that she pinned.

    “We’ve now got data to prove this is a common task: 28 percent of users claim to pull up the Pinterest app on their smartphones while shopping, and 52 percent of daily Pinterest users do this,” he said.

    “Only 27 percent of active Pinterest users claim to be following any brand on Pinterest, yet most believe that marketers can add value to the platform. Too many brands have been on the sidelines of Pinterest.”

    As Pinterest turns into more of a search destination, brands aren’t going to necessarily need to gain large follower counts for the channel to be effective. That is if they can gain visibility in the results. It’s only going to get more competitive.

    Are you getting significant traffic and/or conversions from Pinterest? Do you expect to going forward? Let us know in the comments.

    Images via Pinterest, Shareaholic

  • What Have Google’s Biggest Mistakes Been?

    Do you feel like Google makes many mistakes when it comes to trying to improve its search results? Do you think they’ve gone overboard or not far enough with regards to some aspect of spam-fighting?

    In the latest Google Webmaster Help video, head of webspam Matt Cutts talks about what he views as mistakes that he has made. He discusses two particular mistakes, which both involve things he thinks Google just didn’t address quickly enough: paid links and content farms.

    What do you think is the biggest mistake Google has made? Share your thoughts in the comments.

    The exact viewer-submitted question Cutts responds to is: “Was there a key moment in your spam fighting career where you made a mistake that you regret, related to spam?”

    Cutts recalls, “I remember talking to a very well-known SEO at a search conference in San Jose probably seven years ago (give or take), and that SEO said, ‘You know what? Paid links are just too prevalent. They’re too common. There’s no way that you guys would be able to crack down on them, and enforce that, and come up with good algorithms or take manual action to sort of put the genie back in the bottle,’ as he put it. That was when I realized I’d made a mistake that we’d allowed paid links that pass PageRank to go a little bit too far and become a little bit too common on the web.”

    “So in the early days of 2005, 2006, you’d see Google cracking down a lot more aggressively, and taking a pretty hard line on our rhetoric about paid links that pass PageRank,” he continues. “At this point, most people know that Google disapproves of it, it probably violates the Federal Trade Commission’s guidelines, all those sorts of things. We have algorithms to target it. We take spam reports about it, and so for the most part, people realize, it’s not a good idea, and if they do that, they might face the consequences, and so for the most part, people try to steer clear of paid links that pass PageRank at this point. But we probably waited too long before we started to take a strong stand on that particular issue.”

    Yes, most people who engage in paid links are probably aware of Google’s stance on this. In most cases, gaming Google is probably the ultimate goal. That doesn’t mean they’re not doing it though, and it also doesn’t mean that Google’s catching most of those doing it. How would we know? We’re not going to hear about them unless they do get caught, but who’s to say there aren’t still many, many instances of paid links influencing search results as we speak?

    The other mistake Cutts talks about will be fun for anyone who has ever been affected by the Panda update (referred to repeatedly as the “farmer” update in its early days).

    Cutts continues, “Another mistake that I remember is there was a group of content farms, and we were getting some internal complaints where people said, ‘Look, this website or that website is really bad. It’s just poor quality stuff. I don’t know whether you’d call it spam or low-quality, but it’s a really horrible user experience.’ And I had been to one particular page on one of these sites because at one point my toilet was running, and I was like, ‘Ok, how do you diagnose a toilet running?’ and I had gotten a good answer from that particular page, and I think I might have over-generalized a little bit, and been like, ‘No, no. There’s lots of great quality content on some of these sites because look, here was this one page that helped solve the diagnostic of why does your toilet run, and how do you fix it, and all that sort of stuff.’”

    “And the mistake that I made was judging from that one anecdote, and not doing larger scale samples and listening to the feedback, or looking at more pages on the site,” he continues. “And so I think it took us a little bit longer to realize that some of these lower-quality sites or content farms or whatever you want to call them were sort of mass-creating pages rather than really solving users’ needs with fantastic content. And so as a result, I think we did wake up to that, and started working on it months before it really became wide-scale in terms of complaints, but we probably could’ve been working on it even earlier.”

    The complaints were pretty loud and frequent by the time the Panda update was first pushed, but it sounds like it could have been rolled out (and hurt more sites) a lot earlier than it eventually was. You have to wonder how that would have changed things. Would the outcome have been different if it had been pushed out months before it was?

    “Regardless, we’re always looking for good feedback,” says Cutts. “We’re always looking for what are we missing? What do we need to do to make our web results better quality, and so anytime we roll something out, there’s always the question of, ‘Could you have thought of some way to stop that or to take better action or a more clever algorithm, and could you have done it sooner? I feel like Google does a lot of great work, and that’s very rewarding, and we feel like, ‘Okay, we have fulfilled our working hours with meaningful work,’ and yet at the same time, you always wonder could you be doing something better. Could you find a cleaner way to do it – a more elegant way to do it – something with higher precision – higher recall, and that’s okay. It’s healthy for us to be asking ourselves that.”

    It’s been a while since Google pushed out any earth-shattering algorithm updates. Is there something Google is missing right now that Cutts is going to look back on, and wonder why Google didn’t do something earlier?

    Would you say that Google’s results are better as a result of its actions against paid links and content farms? What do you think Google’s biggest mistake has been? Let us know in the comments.

  • Google: Links Will Become Less Important

    Links are becoming less important as Google gets better at understanding the natural language of users’ queries. That’s the message we’re getting from Google’s latest Webmaster Help video. It will be a while before links become completely irrelevant, but the signal that Google’s algorithm was basically based upon is going to play less and less of a role as time goes on.

    Do you think Google should de-emphasize links in its algorithm? Do you think they should count as a strong signal even now? Share your thoughts.

    In the video, Matt Cutts takes on this user-submitted question:

    Google changed the search engine market in the 90s by evaluating a website’s backlinks instead of just the content, like others did. Updates like Panda and Penguin show a shift in importance towards content. Will backlinks lose their importance?

    “Well, I think backlinks have many, many years left in them, but inevitably, what we’re trying to do is figure out how an expert user would say this particular page matched their information needs, and sometimes backlinks matter for that,” says Cutts. “It’s helpful to find out what the reputation of a site or of a page is, but for the most part, people care about the quality of the content on that particular page – the one that they landed on. So I think over time, backlinks will become a little less important. If we could really be able to tell, you know, Danny Sullivan wrote this article or Vanessa Fox wrote this article – something like that, that would help us understand, ‘Okay, this is something where it’s an expert – an expert in this particular field – and then even if we don’t know who actually wrote something, Google is getting better and better at understanding actual language.”

    “One of the big areas that we’re investing in for the coming few months is trying to figure out more like how to do a Star Trek computer, so conversational search – the sort of search where you can talk to a machine, and it will be able to understand you, where you’re not just using keywords,” he adds.

    You know, things like this:

    Cutts continues, “And in order to understand what someone is saying, like, ‘How tall is Justin Bieber?’ and then, you know, ‘When was he born?’ to be able to know what that’s referring to, ‘he’ is referring to Justin Bieber – that’s the sort of thing where in order to do that well, we need to understand natural language more. And so I think as we get better at understanding who wrote something and what the real meaning of that content is, inevitably over time, there will be a little less emphasis on links. But I would expect that for the next few years we will continue to use links in order to assess the basic reputation of pages and of sites.”

    Links have always been the backbone of the web. Before Google, they were how you got from one page to the next. One site to the next. Thanks to Google, however (or at least thanks to those trying desperately to game Google, depending on how you look at it), linking is broken. It’s broken as a signal because of said Google gaming, which the search giant continues to fight on an ongoing basis. The very concept of linking is broken as a result of all of this too.

    Sure, you can still link however you want to whoever you want. You don’t have to please Google if you don’t care about it, but the reality is, most sites do care, because Google is how the majority of people discover content. As a result of various algorithm changes and manual actions against some sites, many are afraid of the linking that they would have once engaged in. We’ve seen time after time that sites are worried about legitimate sites linking to them because they’re afraid Google might not like it. We’ve seen sites afraid to naturally link to other sites in the first place because they’re afraid Google might not approve.

    No matter how you slice it, linking isn’t what it used to be, and that’s largely because of Google.

    But regardless of what Google does, the web is changing, and much of that is going mobile. That’s a large part of why Google must adapt with this natural language search. Asking your phone a question is simply a common way of searching. Texting the types of queries you’ve been doing from the desktop for years is just annoying, and when your phone has that nice little microphone icon, which lets you ask Google a question, it’s just the easier choice (in appropriate locations at least).

    Google is also adapting to this mobile world by indexing content within apps as it does links, so if you’re searching on your phone, you can open content right in the app rather than in the browser.

    Last week, Facebook made an announcement taking this concept to another level when it introduced App Links. This is an open source standard (assuming it becomes widely adopted) for apps to link to one another, enabling users to avoid the browser and traditional links altogether by jumping from app to app.

    It’s unclear how Google will treat App Links, but it would make sense to treat them the same as other links.

    The point is that linking itself is both eroding and evolving at the same time. It’s changing, and Google has to deal with that as it comes. As Cutts said, linking will still play a significant role for years to come, but how well Google is able to adapt to the changes in linking remains to be seen. Will it be able to deliver the best content based on links if some of that content is not being linked to because others are afraid to link to it? Will it acknowledge App Links, and if so, what about the issues it’s having? Here’s the “standard” breaking the web, as one guy put it:

    What if this does become a widely adopted standard, but proves to be buggy as shown above?

    Obviously, Google is trying to give you the answers to your queries on its own with the Knowledge Graph when it can. Other times it’s trying to fill in the gaps in that knowledge with similarly styled answers from websites. It’s unclear how much links fit into the significance of these answers. We’ve seen two examples in recent weeks where Google was turning to parked domains.

    Other times, the Knowledge Graph just provides erroneous information. As Cutts said, Google will get better and better at natural language, but it’s clear this is the type of search results it wants to provide whenever possible. The problem is it’s not always reliable, and in some cases, the better answer comes from good old fashioned organic search results (of the link-based variety). We saw an example of this recently, which Google ended up changing after we wrote about it (not saying it was because we wrote about it).

    So if backlinks will become less important over time, does that mean traditional organic results will continue to become a less significant part of the Google search experience? It’s certainly already trended in that direction over the years.

    What do you think? How important should links be to Google’s ranking? Share your thoughts in the comments.

    Images via YouTube, Google

  • Matt Cutts’ Floating Head Reminds You About Your Pages’ Body Content

    Google has released a new public service announcement about putting content in the body of webpages. Naturally, this features Matt Cutts’ head floating in the air.

    “It’s important to pay attention to the head of a document, but you should also pay attention to the body of a document,” he said. “Head might have meta description, meta tags, all that sort of stuff. If you want to put stuff in the head that’s great. Make sure it’s unique. Don’t just do duplicate content. But stuff in the body makes a really big difference as well. If you don’t have the text – the words that will really match on a page – then it’s going to be hard for us to return that page to users. A lot of people get caught up in descriptions, meta keywords, thinking about all those kinds of things. Don’t just think about the head. Also think about the body because the body matters as well.”

    I find it odd that Google would feel the need to make an announcement about this, but apparently people are forgetting about the body so much that it needed to be done.

    Come on.

    Image via YouTube

  • Google On Criteria For Titles In Search Results

    Google has talked about titles in search results in multiple videos in the past, but once again takes on the topic in the latest Webmaster Help video.

    They keep getting questions about it, so why not? In fact, Cutts shares two different questions related to titles in this particular video.

    “Basically, whenever we try to choose the title or decide which title to show in a search result, we’re looking for a concise description of the page that’s also relevant to the query,” Cutts says. “So there’s a few criteria that we look at. Number one, we try to find something that’s relatively short. Number two, we want to have a good description of the page, and ideally the site that the page is on. Number three, we also want to know that it’s relevant to the query somehow. So if your existing HTML title fits those criteria, then often times the default will be to just use your title. So in an ideal world it would accurately describe the page and the site, it would be relevant to the query, and it would also be somewhat short.”

    He continues, “Now, if your current title, as best as we can tell, doesn’t match that, then a user who types in something, and doesn’t see something related to their query, or doesn’t have a good idea about what exactly this page is going to be, is less likely to click on it. So in those kinds of cases, we might dig a little bit deeper. We might use content on your page. We might look at the links that point to your page, and incorporate some text from those links. We might even use the Open Directory Project to try to help figure out what a good title would be. But the thing to bear in mind is that in each of these cases, we’re looking for the best title that will help a user assess whether that’s what they’re looking for. So if you want to control the title that’s being shown, you can’t completely control it, but you can try to anticipate what’s a user going to type, and then make sure that your title reflects not only something about that query or the page that you’re on, but also includes sort of the site that you’re on, or tries to give some context so that the user knows what they’re going to get whenever they’re clicking on it.”

    Google offers tips for creating descriptive page titles in its help center here. It suggests making sure each page on your site has a title specified in the title tag, for starters. It says to keep them descriptive and concise, to avoid keyword stuffing, to avoid repeated or boilerplate titles, to brand your titles, and to be careful about disallowing search engines. It gets into significantly more detail about each of these things, as well as about how it generates titles when the site fails to meet the criteria.
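
    For a site with more than a handful of pages, those criteria are easy to check mechanically. Here’s a minimal sketch that audits a set of already-fetched pages for missing, duplicate, or overlong titles; the 60-character cutoff is an illustrative assumption, not a number Google publishes.

    ```python
    # Rough title audit: each page should have a unique, concise, descriptive <title>.
    from collections import Counter
    from html.parser import HTMLParser

    class TitleExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_title = False
            self.title = ""

        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self.in_title = True

        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False

        def handle_data(self, data):
            if self.in_title:
                self.title += data

    def extract_title(html):
        extractor = TitleExtractor()
        extractor.feed(html)
        return extractor.title.strip()

    def audit_titles(pages, max_length=60):
        """pages maps URL -> HTML source, e.g. from your own crawl."""
        titles = {url: extract_title(html) for url, html in pages.items()}
        counts = Counter(titles.values())
        for url, title in titles.items():
            if not title:
                print(f"{url}: missing <title>")
            elif counts[title] > 1:
                print(f"{url}: duplicate title {title!r}")
            elif len(title) > max_length:
                print(f"{url}: title may be too long ({len(title)} chars)")
    ```

    Feed it a dictionary of URL-to-HTML pairs from your own crawl and it will flag the pages worth a closer look.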

    The page also includes this old video of Cutts talking about snippets in general:

    Here’s a video from five years ago in which Matt talks about changing titles as well:

    Image via YouTube

  • Google Drops Some ‘Upper Decker’ Knowledge (Courtesy Of Another Parked Domain)

    Remember that story from the other day about Google’s questionable “answers” as it relies on websites to fill in the gaps in “knowledge” that its proper Knowledge Graph can’t answer?

    Well this one’s just funny.

    Just think of the traffic Urban Dictionary is missing out on. Oh, and the source is a parked domain again. Seriously, read this.

    Thanks, phillytown.com/glossaryhtm!

    Via Gizmodo

    Image via Google

  • Google: Small Sites Can Outrank Big Sites

    The latest Webmaster Help video from Google takes on a timeless subject: small sites being able to outrank big sites. It happens from time to time, but how can it be done? Do you have the resources to do it?

    Some think it’s simply a lost cause, but in the end, it’s probably just going to depend on what particular area your business is in, and if there are real ways in which you can set yourself apart from your bigger competition.

    Do you see small sites outranking big ones very often? Let us know in the comments.

    This time, Matt Cutts specifically tackles the following question:

    How can smaller sites with superior content ever rank over sites with superior traffic? It’s a vicious circle: A regional or national brick-and-mortar brand has higher traffic, leads to a higher rank, which leads to higher traffic, ad infinitum.

    Google rephrased the question for the YouTube title as “How can small sites become popular?”

    Cutts says, “Let me disagree a little bit with the premise of your question, which is just because you have some national brand, that automatically leads to higher traffic or higher rank. Over and over again, we see the sites that are smart enough to be agile, and be dynamic, and respond quickly, and roll out new ideas much faster than these sort of lumbering, larger sites, can often rank higher in Google search results. And it’s not the case that the smaller site with superior content can’t outdo the larger sites. That’s how the smaller sites often become the larger sites, right? You think about something like MySpace, and then Facebook or Facebook, and then Instagram. And all these small sites have often become very big. Even Alta Vista and Google because they do a better job of focusing on the user experience. They return something that adds more value.”

    “If it’s a research report organization, the reports are higher quality or they’re more insightful, or they look deeper into the issues,” he continues. “If it’s somebody that does analysis, their analysis is just more robust.”

    Of course, sometimes they like the dumbed down version. But don’t worry, you don’t have to dumb down your content that much.

    “Whatever area you’re in, if you’re doing it better than the other incumbents, then over time, you can expect to perform better, and better, and better,” Cutts says. “But you do have to also bear in mind, if you have a one-person website, taking on a 200 person website is going to be hard at first. So think about concentrating on a smaller topic area – one niche – and sort of say, on this subject area – on this particular area, make sure you cover it really, really well, and then you can sort of build out from that smaller area until you become larger, and larger, and larger.”

    On that note, David O’Doherty left an interesting comment on the video, saying, “I can’t compete with Zillow, Trulia or Realtor on size so I try to focus on the smaller important details, neighborhoods, local events, stuff that matters to people. Focusing on a niche, creating trust with the visitors to your site, providing valuable original content is paramount to success. It’s not easy and takes time and I have a lot of help but it appears to be working.”

    “If you look at the history of the web, over and over again, you see people competing on a level playing field, and because there’s very little friction in changing where you go, and which apps you use, and which websites you visit, the small guys absolutely can outperform the larger guys as long as they do a really good job at it,” he adds. “So good luck with that. I hope it works well for you. And don’t stop trying to produce superior content, because over time, that’s one of the best ways to rank higher on the web.”

    Yes, apparently Google likes good content. Have you heard?

    Do you think the big sites can be outranked by little sites with enough good content and elbow grease? Share your thoughts in the comments.

    Image via YouTube

  • Google’s ‘Rules Of Thumb’ For When You Buy A Domain

    Google has a new Webmaster Help video out, in which Matt Cutts talks about buying domains that have had trouble with Google in the past, and what to do. Here’s the specific question he addresses:

    How can we check to see if a domain (bought from a registrar) was previously in trouble with Google? I recently bought, and unbeknownst to me the domain isn’t being indexed and I’ve had to do a reconsideration request. How could I have prevented?

    “A few rules of thumb,” he says. “First off, do a search for the domain, and do it in a couple ways. Do a ‘site:’ search, so, ‘site: domain.com’ for whatever it is that you want to buy. If there’s no results at all from that domain even if there’s content on that domain, that’s a pretty bad sign. If the domain is parked, we try to take parked domains out of our results anyways, so that might not indicate anything, but if you try to do ‘site:’ and you see zero results, that’s often a bad sign. Also just search for the domain name or the name of the domain minus the .com or whatever the extension is on the end because you can often find out a little of the reputation of the domain. So were people spamming with that domain name? Were they talking about it? Were they talking about it in a bad way? Like this guy was sending me unsolicited email, and leaving spam comments on my blog. That’s a really good way to sort of figure out what’s going on with that site or what it was like in the past.”

    “Another good rule of thumb is to use the Internet Archive, so if you go to archive.org, and you put in a domain name, the archive will show you what the previous versions of that site look like. And if the site looked like it was spamming, then that’s definitely a reason to be a lot more cautious, and maybe steer clear of buying that domain name because that probably means you might have – the previous owner might have dug the domain into a hole, and you just have to do a lot of work even to get back to level ground.”

    Don’t count on Google figuring it out or giving you an easy way to get things done.

    Cutts continues, “If you’re talking about buying the domain from someone who currently owns it, you might ask, can you either let me see the analytics or the Webmaster Tools console to check for any messages, or screenshots – something that would let me see the traffic over time, because if the traffic is going okay, and then dropped a lot or has gone really far down, then that might be a reason why you would want to avoid the domain as well. If despite all that, you buy the domain, and you find out there was some really scuzzy stuff going on, and it’s got some issues with search engines, you can do a reconsideration request. Before you do that, I would consider – ask yourself are you trying to buy the domain just because you like the domain name or are you buying it because of all the previous content or the links that were coming to it, or something like that. If you’re counting on those links carrying over, you might be disappointed because the links might not carry over. Especially if the previous owner was spamming, you might consider just doing a disavow of all the links that you can find on that domain, and try to get a completely fresh start whenever you are ready to move forward with it.”
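
    For anyone who wants to script part of that homework, here’s a minimal sketch in Python that uses archive.org’s public Wayback Machine availability API to find the most recent archived copy of a domain before you buy it. The endpoint and response fields are based on the API’s public documentation, the domain shown is just a placeholder, and the ‘site:’ and reputation searches Cutts describes still have to be run in Google itself.

    ```python
    import json
    import urllib.parse
    import urllib.request

    def latest_snapshot(domain):
        """Ask the Wayback Machine availability API for the most recent
        archived copy of a domain, so you can eyeball its past incarnations."""
        query = urllib.parse.urlencode({"url": domain})
        with urllib.request.urlopen("https://archive.org/wayback/available?" + query) as resp:
            data = json.load(resp)
        closest = data.get("archived_snapshots", {}).get("closest")
        if not closest or not closest.get("available"):
            return None
        return closest["url"], closest["timestamp"]

    if __name__ == "__main__":
        domain = "example.com"  # placeholder: the domain you're thinking of buying
        snapshot = latest_snapshot(domain)
        if snapshot:
            url, timestamp = snapshot
            print(f"Most recent archived copy ({timestamp}): {url}")
        else:
            print("No archived copy found; check archive.org manually.")
        # The 'site:' and reputation checks are still manual steps in Google itself:
        print(f"Also run: site:{domain} and a plain search for the domain name")
    ```

    If the snapshots that come back look like thin, spammy pages, that’s the “dug the domain into a hole” scenario Cutts is warning about.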

    About a year ago, Cutts did a video about buying domains with a history of spamming, advising buyers not to be “the guy who gets caught holding the bag.” Watch that one here.

    Image via YouTube

  • Google Penalizes PostJoint, Another Guest Blog Network

    Google has taken out another guest blog network. This time it’s PostJoint.

    Techtada tweeted about it to Matt Cutts (via Search Engine Land), who responded:

    Luana Spinetti says it can “thrive outside of Google”.

    PostJoint is no longer ranking for a search for its own name. What a great user experience and relevant results!

    The penalty comes after Google had already penalized MyBlogGuest. When that happened, PostJoint put up a blog post about how it was different, in which Saleem Yaqub wrote:

    We’ve always put quality first even if this means a smaller user base and lower revenues. We are selective about who we work with, and we moderate everything from user accounts, to links, content and participating sites (on average we decline 70% of sites that apply). We’ve always been concerned about footprints, so from day one we’ve had a unique no browsing approach, where nobody can browse or crawl through our site list or user base. Our technology is built from the ground up with a zero footprints principle in mind. Compare this to MBG which is essentially a modified forum that anyone could join and you’ll start to understand the fundamental differences. We work hard to filter out spam and sites made for SEO. Sometimes activity on PostJoint does include follow links but these are mostly surrounded by good content, good blogs, and good marketers. PostJoint is an independent intermediary, we facilitate the connections and streamline the process, but what the users ultimately do is their own choice.

    Apparently Google doesn’t care about all that.

    Saleem confirms the penalty in the comments of that post (via Search Engine Watch).

    As we’ve seen, Google has legitimate sites afraid of accepting guest blog posts, and some that do accept them afraid to link naturally.

    Image via PostJoint

  • Schema.org Enters Its ‘Next Chapter’

    In 2011, Google, Microsoft (Bing), and Yahoo, the companies behind the big three search engines (Yandex joined later), teamed up to launch Schema.org, an initiative to support a common set of schemas for structured data markup on webpages.

    This week, the companies announced the introduction of vocabulary to let sites describe actions they enable and how said actions can be invoked.

    “When we launched schema.org almost 3 years ago, our main focus was on providing vocabularies for describing entities — people, places, movies, restaurants, … But the Web is not just about static descriptions of entities. It is about taking action on these entities — from making a reservation to watching a movie to commenting on a post,” says a blog post from Google’s Jason Douglas and Sam Goto, Microsoft’s Steve Macbeth and Jason Johnson, Yandex’s Alexander Shubin, and Yahoo’s Peter Mika.

    They refer to the new vocabulary as “the next chapter of schema.org and structured data on the web.”

    “The new actions vocabulary is the result of over two years of intense collaboration and debate amongst the schema.org partners and the larger Web community,” they write. “Many thanks to all those who participated in these discussions, in particular to members of the Web Schemas and Hydra groups at W3C. We are hopeful that these additions to schema.org will help unleash new categories of applications.”
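
    To make that a bit more concrete, here’s a rough, unofficial illustration (assembled in Python for readability) of the kind of potentialAction markup the actions vocabulary enables: a movie page declaring that the movie can be watched at a given URL. The names and URLs are hypothetical, and the overview document linked below is the authoritative reference for the actual vocabulary.

    ```python
    import json

    # A rough JSON-LD illustration of the actions vocabulary: a Movie entity
    # declaring a potentialAction that tells search engines how the entity
    # can be acted on (here, watched). URLs and names are placeholders.
    movie_markup = {
        "@context": "http://schema.org",
        "@type": "Movie",
        "name": "Example Movie",
        "potentialAction": {
            "@type": "WatchAction",
            "target": "http://example.com/watch/example-movie",
        },
    }

    # This JSON would typically be embedded in the page inside a
    # <script type="application/ld+json"> tag.
    print(json.dumps(movie_markup, indent=2))
    ```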

    A couple years ago, Google’s Matt Cutts put out a video discussing schema.org markup as a ranking signal.

    “Just because you implement schema.org doesn’t mean you necessarily rank higher,” he said. “But there are some corner cases like if you were to type in ‘lasagna,’ and then click over on the left-hand side and click on ‘recipes,’ that’s the sort of thing where using schema.org markup might help, because then you’re more likely to be showing up in that at all. So there are some cases where it can be helpful to use schema.org markup.”

    Here’s an overview document that covers what exactly is changing.

    In February, Schema.org introduced sports vocabulary. A couple months prior to that, it announced markup for TV and radio.

    Via SemanticWeb

    Image via Schema.org

  • Cutts Talks SEO ‘Myths,’ Says To Avoid ‘Group Think’

    In the latest “Webmaster Help” video, Matt Cutts talks about SEO “myths”. He responds to this question:

    What are some of the biggest SEO Myths you see still being repeated (either at conferences, or in blogs, etc.)?

    There are a lot of them, he says.

    “One of the biggest, that we always hear,” he says, “is if you buy ads, you’ll rank higher on Google, and then there’s an opposing conspiracy theory, which is, if you don’t buy ads, you’ll rank better on Google, and we sort of feel like we should get those two conspiracy camps together, and let them fight it all out, and then whoever emerges from one room, we can just debunk that one conspiracy theory. There’s a related conspiracy theory or myth, which is that Google makes its changes to try to drive people to buy ads, and having worked in the search quality group, and working at Google for over thirteen years, I can say, here’s the mental model you need to understand why Google does what it does in the search results. We want to return really good search results to users so that they’re happy, so that they’ll keep coming back. That’s basically it. Happy users are loyal users, and so if we give them a good experience on one search, they’ll think about using us the next time they have an information need, and then along the way, if somebody clicks on ads, that’s great, but we’re not gonna make an algorithmic change to try to drive people to buy ads. If you buy ads, it’s not going to algorithmically help your ranking in any way, and likewise it’s not going to hurt your ranking if you buy ads.”

    Google reported its quarterly earnings yesterday with a 21% revenue increase on the company’s own sites (like its search engine) year-over-year. Paid clicks were up 26% during that time.

    Cutts continues with another “myth”.

    “I would say, just in general, thinking about the various black hat forums and webmaster discussion boards, never be afraid to think for yourself. It’s often the case that I’ll see people get into kind of a ‘group think,’ and they decide, ‘Ah ha! Now we know that submitting our articles to these article directories is going to be the best way to rank number one.’ And then six months later, they’ll be like, ‘OK, guest blogging! This is totally it. If you’re guest blogging, you’re gonna go up to number one,’ and a few months before that, ‘Oh, link wheels. You gotta have link wheels if you’re gonna rank number one,’ and it’s almost like a fad.”

    To be fair, some of this “group think” stuff did work for some sites in the past, until Google changed its algorithm to stop it from working.

    He suggests that if somebody really had a “foolproof” way to make money online, they’d probably use it to make money rather than putting it in an e-book or tool, and selling it to people.

    “The idea that you’re going to be able to buy some software package, and solve every single problem you’ve ever had is probably a little bit of a bad idea,” he says.

    “It’s kind of interesting how a lot of people just assume Google’s thinking about nothing but the money as far as our search quality, and truthfully, we’re just thinking about how do we make our search results better,” he says.

    Google’s total revenue for the quarter was up 19% year-over-year, which still wasn’t enough to meet investors’ expectations.

    Image via YouTube

  • Cutts On 404s Vs. 410s: Webmasters Often Shoot Themselves In The Foot

    Google’s latest Webmaster Help video, unlike the one before it, is very webmaster oriented. In it, Matt Cutts discusses how Google handles 404s versus how it handles 410s.

    “Whenever a browser or Googlebot asks for a page, the web server sends back a status code,” he says. “200 might mean everything went totally fine. 404 means the page was not found. 410 typically means ‘gone,’ as in the page is not found, and we do not expect it to come back. So 410 has a little more of a connotation that this page is permanently gone. So the short answer is that we do sometimes treat 404s and 410s a little bit differently, but for the most part, you shouldn’t worry about it. If a page is gone, and you think it’s temporary, go ahead and use a 404. If a page is gone, and you know no other page that should substitute for it…you don’t have anywhere else that you should point to, and you know that that page is gone and never coming back, then go ahead and serve a 410.”

    “It turns out, webmasters shoot themselves in the foot pretty often,” he continues. “Pages go missing, people misconfigure sites, sites go down, people block Googlebot by accident, people block regular users by accident…so if you look at the entire web, the crawl team has to design to be robust against that. So 404, along with, I think, 401s and maybe 403s, if we see a page, and we get a 404, we are gonna protect that page for 24 hours in the crawling system. So we sort of wait, and we say, ‘Well, maybe that was a transient 404. Maybe it wasn’t really intended to be a page not found.’ And so in the crawling system, it will be protected for 24 hours. If we see a 410, then the crawling system says, ‘OK, we assume the webmaster knows what they’re doing because they went off the beaten path to deliberately say that this page is gone.’ So they immediately convert that 410 to an error, rather than protecting it for 24 hours.”

    “Don’t take this too much the wrong way,” Cutts adds. “We’ll still go back and recheck, and make sure, are those pages really gone or maybe the pages have come back alive again, and I wouldn’t rely on the assumption that that behavior will always be exactly the same. In general, sometimes webmasters get a little too caught up in tiny little details, and so if a page is gone, it’s fine to serve a 404. If you know it’s gone for real, it’s fine to serve a 410, but we’ll design our crawling system to try to be robust, but if your site goes down, or if you get hacked or whatever, that we try to make sure that we can still find the good content whenever it’s available.”
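
    If you want to see the distinction in practice, here’s a minimal sketch of a server that returns a 404 for anything merely missing and a 410 for pages known to be permanently retired. It uses Python’s standard library, the retired paths are made up for illustration, and, per Cutts’ caveat, the crawl-side behavior described above is a detail that can change.

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical paths we know are permanently retired and never coming back.
    GONE_FOR_GOOD = {"/old-promo", "/2009-widget"}

    class StatusDemoHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/":
                body = b"Home page is fine."
                self.send_response(200)  # everything went totally fine
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(body)
            elif self.path in GONE_FOR_GOOD:
                # 410: we know this page is gone for good, so say so explicitly.
                self.send_error(410, "Gone")
            else:
                # 404: not found, but it might just be missing temporarily.
                self.send_error(404, "Not Found")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), StatusDemoHandler).serve_forever()
    ```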

    He also notes that these details can change. Long story short, don’t worry about it that much.

    Image via YouTube

  • Google Considers Making SSL A Ranking Signal

    About a month ago, Google’s head of webspam Matt Cutts said at the Search Marketing Expo that he’d like to see Google make SSL site encryption a signal in Google’s ranking algorithm.

    Barry Schwartz at SMX sister site Search Engine Land wrote at the time, “Let me be clear, Matt Cutts, Google’s head of search spam, did not say it is or it will be part of the ranking algorithm. But he did say that he personally would like to see it happen in 2014. Matt Cutts is a senior Google search engineer that has opinions that matter, so I wouldn’t be surprised if Google does announce in 2014 that this is a ranking factor – but it is far off and may never happen.”

    It doesn’t look like anything new has really happened with this yet, but the Wall Street Journal has a new report out reaffirming Cutts’ desire for such a signal:

    Cutts also has spoken in private conversations of Google’s interest in making the change, according to a person familiar with the matter. The person says Google’s internal discussions about encryption are still at an early stage and any change wouldn’t happen soon.

    A Google spokesman said the company has nothing to announce at this time.

    Search Engine Land’s Danny Sullivan is quoted in the article, and makes a pretty valid point that Google adopting such a signal could “cause an immediate change by all the wrong sites” – those specifically trying to game Google.

    Of course, something tells me Cutts, as head of webspam, has considered this. If it does become a signal, it’s likely not going to carry a huge amount of weight. Google will still always want to provide the best user experience and content to users. At least that’s what its official stance will be.

    Even if the motivation is to improve search rankings, sites making themselves more secure can’t be a bad thing (until it is). Then again, one has to wonder whether Google would launch another algorithm update to penalize sites that adopt encryption purely to influence rankings, much as it penalizes those who build links for the same reason. I wonder how that would work.

    Image via YouTube

  • Matt Cutts Gets New ‘Melody’ Treatment

    Okay, this exists.

    This comes from HighPosition.com. I randomly came across it on StumbleUpon, and it hardly has any views yet, so let’s change that. If you watch Matt Cutts’ videos regularly, you owe this one to yourself.

    You can watch it in a more theatrical setting here.

    And don’t forget to check out the Matt Cutts Donkey Kong and Whack-a-Mole games. Oh, and of course this classic:

    Wow, it just dawned on me that we’ve been covering these Matt Cutts videos for an absurdly long time.

    Image via YouTube

  • Google Hits Japanese Link Networks

    Google’s head of webspam tweeted early Tuesday morning that Google has taken action on seven Japanese link networks over the past few months.

    He’s proud of the team:

    This revelation follows other announcements made by Cutts in recent months, which have seen Google on the warpath with link networks, mostly in Europe. Last month, Cutts mentioned taking action on a couple of German networks.

    Back in January, Google took action on the French network Buzzea, which was followed by action against networks in Germany and Poland, and warnings to Spanish and Italian networks.

    In addition to all of this, of course, has been Google’s attack on guest blogging, with notable guest blog network MyBlogGuest getting famously penalized.

    Image via YouTube