WebProNews

Tag: Search

  • Bing for Mobile Gets New Features

    Bing has launched some new mobile features, including Facebook sharing, news, maps/list split view, search history, and trending topics.

    Users can now share images, local business details and apps (that one’s just for iOS) with Facebook friends on the go. The news feature lets you get news from Bing for Mobile’s browse home page. “For headline news, we added a carousel that lets you quickly flip through the headlines so you’ll never miss a beat or scroll down to see the top three headlines and images for numerous categories,” Bing says. “The categories are the same you have come to expect from Bing News on your PC, including U.S., World, Local (state), Entertainment, Science/Technology, Business, Politics, Sports, and Health. This feature is currently in the US only.”

    Bing Mobile sharing features

    Bing Mobile News Features

    The maps/list split view lets you view both the map and business listings/directions in a single view. As you interact with the list, the map will show the business or direction point you’re working with.

    Maps List Split view

    The search history is self-explanatory, as are trending topics, which are only available in the U.S.

    The updates are available on devices with HTML5-capable browsers (iPhone, Android, RIM), and non-touch RIM devices running RIM OS 6.0 or higher get a new experience with non-touch gestures that correspond to the touch gestures.

  • Ranking Google Ranking Factors By Importance

    Rand Fishkin and SEOmoz polled 132 SEO experts with data from over 10,000 Google search results, and have attempted to rank the importance of ranking signals. It’s not confirmed fact, obviously. Google won’t provide such information, but I suppose the next best thing is the collective opinion of a large group of people who make their livings getting sites to rank in search engines, and Fishkin has put together an impressive presentation.

    Do you think Google is ranking search results effectively? Comment here.

    You can view the entire presentation here, but I’ve pulled out a few key slides that basically sum up the findings.

    The factors are actually broken down into the following subsets, where each is ranked against other related factors: overall algorithmic factors, page-specific link signals, domain-wide link signals, on-page signals, domain name match signals, social signals, and highest positively + negatively correlated metrics overall.

    The results find that page-level link metrics are the top algorithmic factors (22%), followed by domain-level link authority features (21%). This is similar to the same SEOmoz poll for 2009, but there is a huge difference in the numbers, indicating that experts are less certain that page-level link metrics are as important. In 2009, they accounted for 43%.

    Search Ranking Factors

    Page-specific link signals are described as metrics based on links that point specifically to the ranking page. This is how the results panned out there:

    Page-specific linking factors

    According to Fishkin, the main takeaways here are that SEOs believe the power of links has declined, that link diversity matters more than raw quantity, and that exact-match anchor text appears slightly less well correlated than partial-match anchor text in external links.

    Domain-wide link signals are described as metrics based on links that point anywhere on the ranking domain. Here is what the poll looked like in this department:

    Domain Level linking factors

    The report compares followed vs. nofollowed links to the domain and page, finding that nofollow links may indeed help with rankings:

    Nofollow

    On-page signals are described as metrics based on keyword usage and features of the ranking document. Here’s what the poll looked like on these:

    on-page factors

    Fishkin determines that while it’s tough to differentiate with on-page optimization, longer documents tend to rank better (possibly as a result of Panda), long titles and URLs are still likely bad for SEO, and using keywords earlier in tags and docs “seems wise”.

    Here is how the domain name extensions in search results shook out:

    Domain extensions

    Here are the poll results on social-media-based ranking factors (which Google has seemingly been putting more emphasis on of late):

    Social Factors

    Fishkin suggests that Facebook may be more influential than Twitter, or that it might simply be that Facebook data is more robust and available for URLs in SERPs. He also determines that Google Buzz is probably not in use directly, as so many users simply have their tweet streams go to Buzz (making the data correlation lower). He also notes that there is a lot more to learn about how Google uses social.

    Andy Beard has been testing whether links posted in Google Buzz pass PageRank or help with indexing of content since February 2010. He is now claiming evidence that Buzz is used for indexing.

    Danny Sullivan asked Google’s Matt Cutts about the SEOmoz ranking factors survey in a Q&A session at SMX Advanced this week – specifically about the correlation between Facebook shares and Google rankings. Cutts is quoted as saying, “This is a good example of why correlation doesn’t equal causality because Google doesn’t get Facebook shares. We’re blocked by that data. We can see fan pages, but we can’t see Facebook shares.”

    The SEOmoz presentation itself has a lot more info about the methodology used and how the correlation worked out.

    All of the things covered in the presentation should be taken into consideration, particularly for sites that have experienced significant drops in rankings (because of things like the Panda update or other algorithm tweaks). We recently discussed with Dani Horowitz of DaniWeb a number of other things sites can do that may help rankings in the post-Panda Google search index. DaniWeb had been hit by Panda, but has seen a steady uptick in traffic since making some site adjustments, bringing up the possibility of Panda recovery.

    Barry Schwartz at Search Engine Roundtable polled his readers about Panda recovery, and 4% said they had fully recovered, while more indicated that they had recovered partially. Still, the overwhelming majority had not recovered, indicating that Google probably did its job right for the most part (that’s not to say that some sites that didn’t deserve to get hit didn’t get hit). In that same Q&A session, Cutts said, “The general rule is to push stuff out and then find additional signals to help differentiate on the spectrum. We haven’t done any pushes that would directly pull things back. We have recomputed data that might have impacted some sites. There’s one change that might affect sites and pull things back.”

    A new adjustment to the Panda update has been approved at Google, but has not rolled out yet, he says. This adjustment will be aimed at keeping scraped content from ranking over original content.

    Home Page Content

    There have also been other interesting bits of search-related information coming out of Google this week. Cutts posted a Webmaster Central video talking about the amount of content you should have on your homepage.

    “You can have too much,” said Cutts. “So I wouldn’t have a homepage that has 20MB. You know, that takes a long time to download, and users who are on a dial-up or a modem, a slow connection, they’ll get angry at you.”

    “But in general, if you have more content on a home page, there’s more text for Googlebot to find, so rather than just pictures, for example, if you have pictures plus captions – a little bit of textual information can really go a long way,” he continued.

    “If you look at my blog, I’ve had anywhere from 5 to 10 posts on my main page at any given time, so I tend to veer towards a little more content when possible,” he added.

    Who You Are May Count More

    Who you are appears to be becoming more important in Google. Google announced that it’s supporting authorship markup, which it will use in search results. The company is experimenting with using the data to help people find content from authors in results, and says it will continue to look at ways it could help the search engine highlight authors and rank results. More on this here.

    Search Queries Data from Webmaster Tools Comes to Google Analytics

    Google also launched a limited pilot for search engine optimization reports in Google Analytics, tying Webmaster Central data to Google Analytics, after much demand. It will use search queries data from WMT, which includes:

  • Queries: The total number of search queries that returned pages from your site over the given period. (These numbers can be rounded, and may not be exact.)
  • Query: A list of the top search queries that returned pages from your site.
  • Impressions: The number of times pages from your site were viewed in search results, and the percentage increase/decrease in the daily average impressions compared to the previous period. (The number of days per period defaults to 30, but you can change it at any time.)
  • Clicks: The number of times your site’s listing was clicked in search results for a particular query, and the percentage increase/decrease in the average daily clicks compared to the previous period.
  • CTR (clickthrough rate): The percentage of impressions that resulted in a click to your site, and the increase/decrease in the daily average CTR compared to the previous period.
  • Avg. position: The average position of your site on the search results page for that query, and the change compared to the previous period. Green indicates that your site’s average position is improving. To calculate average position, we take into account the ranking of your site for a particular query (for example, if a query returns your site as the #1 and #2 result, then the average position would be 1.5).
    This week, we also ran a very interesting interview between Eric Enge and Bill Slawski addressing Google search patents and how they might relate to the Google Panda update.

    Back to the SEOmoz data. Do you think the results reflect Google’s actual algorithm well? Tell us what you think.

  • New Google Panda Update Approved, On the Way

    Google’s Matt Cutts spoke in a Q&A session with Danny Sullivan at SMX Advanced this week, and discussed the Panda update, among other things.

    A lot of sites have been critical of Google for returning results that are scraped versions of their original content. Cutts is quoted as saying in a liveblog of the session, “A guy on my team [is] working on that issue. A change has been approved that should help with that issue. We’re continuing to iterate on Panda. The algorithm change originated in search quality, not the web spam team.”

    He says there’s another change coming soon, and that he still doesn’t know when Panda will be launched fully internationally (in other languages). He also says they haven’t made any manual exceptions with Panda.

    You may recall that the Mac blog Cult of Mac was hit by the original Panda update, and then after exchanging some dialogue with Google the site ended up getting some new traffic. Matt says, however, “We haven’t made any manual exceptions. Cult of Mac might have been confused because they started getting all this new traffic from blogging about it, but we haven’t made any manual exceptions.”

    Yesterday we looked at some poll results from Search Engine Roundtable that found 4% of sites were saying they had fully recovered from the Panda update. Some other sites have been finding partial recovery.

    Search Engine Roundtable Shares Panda Poll

    Image credit: Search Engine Roundtable

    On the prospect of sites having recovered from the update, Matt is quoted as saying, “The general rule is to push stuff out and then find additional signals to help differentiate on the spectrum. We haven’t done any pushes that would directly pull things back. We have recomputed data that might have impacted some sites. There’s one change that might affect sites and pull things back.”

    You may also recall Google’s list of questions that webmasters could use to assess the quality of their content. Cutts talked briefly about those questions, saying, “It could help as we recompute data.”

    He also said that what is being called “Panda 2.2” has been approved but has not yet been rolled out. “If we think you’re relatively high quality, Panda will have a smaller impact. If you’re expert enough and no one else has the good content, even if you’ve been hit by Panda that page can still rank.”

    That says a lot about original content.

  • Bing Webmaster Tools Refreshed

    Bing has launched some enhancements to Bing Webmaster Tools in an update called “Honey Badger”.

    “Today’s redesign offers webmasters a simplified experience that allows them to quickly analyze and identify trends – while also bringing new and unique features to the industry,” a representative for Bing tells WebProNews. “Our goal is to help webmasters make faster, more informed decisions and drive new insights about their website by presenting them with rich visuals and more organized, relevant content.”

    Enhancements include:

    • Crawl delay management: Lets webmasters configure the bingbot crawl rate for a specific domain.
    • Index Explorer: Gives webmasters the ability to access data in the Bing index regarding a specified domain.
    • User and Role Management: Provides site owners with the ability to grant admin, read/write or read-only access to other users for their site.

    Crawl delay is configurable by hour. Users can ask Bing to crawl slower during peak business hours or have it crawl faster during off-peak hours. There is drag-and-drop functionality that lets users create a crawl graph by clicking and dragging the mouse pointer across the graph. Individual columns can also be clicked for fine-tuning.

    Bing Crawl Settings
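
    The hourly pattern above is only configurable through the Webmaster Tools interface, but if all you want is a blanket slowdown, Bing’s crawler also honors the Crawl-delay directive in robots.txt. A minimal sketch (the value is just an example; larger numbers ask bingbot to crawl less aggressively):

    User-agent: bingbot
    Crawl-delay: 5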

    Index Explorer, Bing says, is a “complete rewrite” of the Index Tracker backend, focusing on freshness, performance, extensibility, reduced machine footprint, and stability and failure detection. New sites will have this data as they sign up.

    Bing Index Explorer

    The company also launched the ability for webmasters to manage deep-links and added over 40 new educational documents and videos to the Toolbox site. The content covers things like: using Webmaster Tools, data explanation, link building, removing/blocking pages from Bing’s index, SEO guidance, managing URL parameters, rich snippets (schema.org), canonicalization, nofollow, managing redirects, 404 page management, etc.

    Bing says you can “count on more monthly content being added” to Webmaster Tools in the near future.

  • Google on How Much Content You Should Have On Your Home Page

    The latest Google Webmaster Central video from Matt Cutts talks about home page content. Given issues like content depth and site speed, which Google has brought up a great deal recently, the content on your home page is worth considering with these things in mind as well.

    The question from a user, which Matt addresses, is: “More or less content on a homepage?”

    Today’s webmaster video: How much content should be on a homepage? http://goo.gl/SE9ss

    “You can have too much,” says Cutts. “So I wouldn’t have a homepage that has 20MB. You know, that takes a long time to download, and users who are on a dial-up or a modem, a slow connection, they’ll get angry at you.”

    “But in general, if you have more content on a home page, there’s more text for Googlebot to find, so rather than just pictures, for example, if you have pictures plus captions – a little bit of textual information can really go a long way,” he continues.

    “If you look at my blog, I’ve had anywhere from 5 to 10 posts on my main page at any given time, so I tend to veer towards a little more content when possible,” he adds.

    You can see Matt’s blog here, if you want a better idea of how he does it.

  • 4% Say Sites Fully Recovered from Panda Update

    We recently ran an interview with Dani Horowitz, who runs Daniweb, an IT discussion forum that got pounded by Google’s Panda update. Horowitz told us about some things she’s been doing to the site, which have led to a consistent uptick in search traffic post-Panda.

    While nobody ever claimed the site had gone through a full recovery, we had seen other stories about some sites hit by Panda getting some search traffic back. While Dani noted, “We’re still nowhere near where we were before,” she said she was seeing improvements week after week.

    The main takeaway from the discussion was that there is hope if you’ve been hit by Panda. You can still do things that help your content rank better. Google has openly discussed some of them.

    Barry Schwartz at Search Engine Roundtable polled his readers to see how many have experienced recovery from the Panda update. Over 500 responded, and 4% said they have recovered fully, while another 8% said they’ve recovered, but not fully.

    Search Engine Roundtable Shares Panda Poll
    Image credit: Search Engine Roundtable

    Obviously this is not representative of the entire web and how sites have performed in Google. It’s a simple poll of 500 people who presumably pay fairly close attention to the search industry. You also have to take into consideration that a lot of people think they were hit by Panda, but may have actually been hit by other less-publicized updates.

    Either way, it’s likely accurate in that the majority of sites hit have not recovered. If most sites had recovered, it would indicate that Google had not done its job very well, or that tons of low-quality content sites had shifted dramatically in favor of truly great content.

  • Google Sacrifices Search Quality to Preserve Open Web

    Google has pulled its search engine at google.kz out of Kazakhstan, where the country’s government is requiring all .kz domain names to be operated from servers located in the country. Now when you go to google.kz, you’re redirected to google.com/webhp?hl=kk.

    Google says its users in Kazakhstan may see a decrease in search quality, but that the company does not want to contribute to a fractured Internet. Here is the entire explanation from Google SVP, Research & Systems Infrastructure, Bill Coughran, as posted on the official Google Blog:

    The genius of the Internet has always been its open infrastructure, which allows anyone with a connection to communicate with anyone else on the network. It’s not limited by national boundaries, and it facilitates free expression, commerce and innovation in ways that we could never have imagined even 20 or 30 years ago.

    Some governments, however, are attempting to create borders on the web without full consideration of the consequences their actions may have on their own citizens and the economy. Last month, the Kazakhstan Network Information Centre notified us of an order issued by the Ministry of Communications and Information in Kazakhstan that requires all .kz domain names, such as google.kz, to operate on physical servers within the borders of that country. This requirement means that Google would have to route all searches on google.kz to servers located inside Kazakhstan. (Currently, when users search on any of our domains, our systems automatically handle those requests the fastest way possible, regardless of national boundaries.)

    We find ourselves in a difficult situation: creating borders on the web raises important questions for us not only about network efficiency but also about user privacy and free expression. If we were to operate google.kz only via servers located inside Kazakhstan, we would be helping to create a fractured Internet. So we have decided to redirect users that visit google.kz to google.com in Kazakh. Unfortunately, this means that Kazakhstani users will experience a reduction in search quality as results will no longer be customized for Kazakhstan.

    Measures that force Internet companies to choose between taking actions that harm the open web, or reducing the quality of their services, hurt users. We encourage governments and other stakeholders to work together to preserve an open Internet, which empowers local users, boosts local economies and encourages innovation around the globe.

    It’s not that surprising that Google would make such a move. The company has consistently promoted an “open” web for years, and after the ordeal with China, it made clear that it’s not above pulling out of a country. And frankly, China is a much bigger economy than Kazakhstan.

    Still, it is interesting that this comes at the sacrifice of search quality, and that Google is openly pointing this out, at a time when Google’s search quality has been heavily criticized and iterated upon relentlessly by the company, with recent algorithm updates.

  • rel=”author” is Same-Site Only

    I managed to ping Google’s @mattcutts on Twitter after the announcement of rel=”author” support from Google, and he clarified the use case a little.

    As Twitter’s search is still so terrible at finding things, I am adding the conversation here.

    New rel=”author” support http://goo.gl/FCK3l ( @mattcutts is this suitable for cross domain attribution too for syndicated content? )

    @AndyBeard for now it’s same-site, just to be safe. My (personal) guess is we’ll see if that can be expanded over time in a trusted way.

    @mattcutts thanks for the clarification & intended current use

    @AndyBeard sure thing. Remember, rel=canonical also started as same-site only, then as we trusted it more, it became cross-site.

    @mattcutts I can’t sneak a rel=”canonical” into an author bio link, or ask content partners such as @WebProNews to include it

    My last point is at least partially related to Google’s Panda update, because it is quite frequently seen, possibly more than before, that original content doesn’t rank yet scraped copies of it do.

    There are reasons why that happens, but a microformat rel=”author” and possibly something new… rel=”original” for a link to the canonical source would be useful.

    Something like this would be easier to implement than the metatag alternative currently in testing with newspapers (original-source & syndication-source).

    This is something really easy to get implemented in a number of CMSs, though in most cases it would be theme-dependent, not something that is part of core.
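
    To make the comparison concrete, here is a rough sketch of what the two approaches could look like on a syndicated copy of an article. The URLs are placeholders, and rel=”original” is only the suggestion made above, not a value search engines currently recognize; roughly, syndication-source points at the preferred URL for a syndicated piece, while original-source credits the first source to report a story.

    <!-- Metatag approach being tested with news publishers -->
    <meta name="syndication-source" content="http://example.com/original-article">
    <meta name="original-source" content="http://example.com/original-article">

    <!-- Suggested link-level alternative (hypothetical rel value) -->
    Originally published at <a rel="original" href="http://example.com/original-article">Example Site</a>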

    Originally published at Internet Business & Marketing Strategy

  • Search Engine Patents and Panda

    Bill Slawski is the president and founder of SEO by the Sea, and has been engaging in professional SEO and internet marketing consulting since 1996. With a Bachelor of Arts Degree in English from the University of Delaware, and a Juris Doctor Degree from Widener University School of Law, Bill worked for the highest level trial Court in Delaware for 14 years as a court manager and administrator, and as a technologist/management analyst. While working for the Court, Bill also began to build and promote web pages, and became a full time SEO in 2005. Working on a wide range of sites, from Fortune 500 to small business pages, Bill also blogs about search engine patents and white papers on his seobythesea.com blog.

    What are the Most Likely Signals Used by Panda?

    Eric Enge: Let’s chat about some of the patents that might be playing a role in Panda 1, 2, 3, 4, 5, 6, 7 and beyond. I would like to get your thoughts on what signals are used for measuring either content quality or user engagement.

    Bill Slawski: I’ve been looking at sites impacted by Panda. I started from the beginning with remedial SEO. I went through the sites, crawled through them, looked for duplicate content issues within the same domain, looked for things that shouldn’t be indexed that were, and went through the basic list that Google provides in their Webmaster Tools area.

    In the Wired interview with Amit Singhal and Matt Cutts regarding this update, they mentioned an engineer named Panda. I found his name on the list of papers written by Googlers and read through his material. I also found three other tool and systems engineers named Panda, and another engineer who writes about information retrieval and architecture. I concluded that the Panda in question was the person who worked on the PLANET paper (more on this below).

    For signals regarding quality, we can look to the lists of questions from Google. For example: Does your web site read like a magazine? Would people trust you with their credit card? There are many things on a web site that might indicate quality and make the page seem more credible and trustworthy, and lead the search engine to believe it was written by someone who has more expertise.

    The way things tend to be presented on pages, for instance where ad blocks are shown, may or may not be signals. If we look at the PLANET whitepaper, “Massively Parallel Learning of Tree Ensembles with MapReduce,” its focus isn’t so much on reviewing quality signals or even user feedback but, rather, on how Google is able to take a machine learning process dealing with decision trees and scale it up to use multiple computers at the same time. They could put many things in memory and compare one page against another to see if certain features and signals appear upon those pages.

    Eric Enge: So, the PLANET whitepaper described how to take a process, which before was constrained to a one computer machine learning process, and put it into a distributed environment to gain substantially more power. Is that a fair assessment?

    Bill Slawski: That would be a fair assessment. It would use the Google file system and Google’s MapReduce. It would enable them to draw many things into memory to compare to each other and change multiple variables at the same time. For example, a regression model type approach.

    Something that may have been extremely hard to use on a very large dataset becomes much easier when it can scale. It’s important to think about what shows up on your web page as a signal of quality.

    It’s possible that their approach is to manually identify pages that have quality (content quality, presentation, and so on) and use those as a seed set for the machine learning process. Using that seed set to identify other pages, and how well they may rank in terms of these different features, makes it harder for us to determine exactly which signals the search engines are looking for.

    If they are following this PLANET-type approach in Panda with the machine learning, there may be other things mixed in. It is hard to tell. Google may not have solely used this approach. They may have tightened up phrase-based indexing and made that stronger in a way that helps rank and re-rank search results.

    Panda may be a filter on top of those where some web sites are promoted and other web sites are demoted based upon some type of quality signal score.

    It appears that Panda is a re-ranking approach. It’s not a replacement for relevance and Page Rank and the two hundred plus signals we are used to hearing about from Google. It may be a filter on top of those where some web sites are promoted and other web sites are demoted based upon some type of quality signal score.

    Eric Enge: That’s my sense of it also. Google uses the term classifier so you could imagine, either before running the basic algorithm or after, it is similar to a scale or a factor up or down.

    Bill Slawski: Right. That’s what it seems like.

    Page Features as an Indicator of Quality

    Eric Enge: You shared another whitepaper with me which dealt with sponsored search. Does that whitepaper add any insight into Panda? The PLANET paper followed up on an earlier paper on sponsored search which covered predicting bounce rates on ads. It looked at the landing pages those ads brought you to based upon features found on the landing pages.

    They used this approach to identify those features and then determined which ones were higher quality based upon their feature collection. Then they could look at user feedback, such as bounce rates, to see how well they succeeded or failed. This may lead to metrics such as the percentage of the page above the fold which has advertising on it.

    Bill Slawski: Now you are talking about landing pages, where many advertisers may direct someone to an actual page where they can conduct a transaction. They may bring them to an informational page, or an information-light page, that may not be as concerned with SEO as it is with calls to action, signals of reassurance using different logos, and symbols that you would get from security certification agencies.

    That set of signals is most likely different from what you would find on a page that was built for the general public or for search engines. However, if you go back to the original PLANET paper, they said, “this is sort of our proof of concept, this sponsored search thing. If it works with that it can work well with other very large datasets in places like organic search.”

    Eric Enge: So, you may use bounce rate directly as a ranking signal but when you have newer information to deal with why not predict it instead?

    Bill Slawski: Right. If you can take a number of features out of a page and use them in a way that gives them a score, and if the score can match up with bounce rate and other user engagement signals, chances are a feature-based approach isn’t a bad one to take. Also, you can use the user behavior data as a feedback mechanism to make sure you are doing well.

    Eric Enge: So, you are using the actual user data as a validator rather than a signal. That’s interesting.

    Bill Slawski: Right. You could do the same thing with organic search, which, to a degree, they did with the blocked-pages signal. This is where 85% of pages that were blocked were also pages that had lower quality scores. You can also look at other signals, for example, long clicks.

    Eric Enge: Long clicks, what’s that?

    Bill Slawski: I dislike the term bounce rate because, by itself, it doesn’t mean that someone visits the page and then leaves in under a few seconds. It can just as easily be someone who goes to a page, looks at it, spends time on it, and then leaves without going somewhere else. A long click is when you go to a page and you actually spend time there.

    Eric Enge: Although, you don’t know whether or not they spent time there because they had to deal with a phone call.

    Bill Slawski: Or, they opened something else up in a new tab and didn’t look at it for a while. There are other things that could measure this and ways to confirm agreement with it, such as how far someone scrolls that page.

    Eric Enge: Or, if they print the page.

    Bill Slawski: And clicks at the bottom of the page.

    Eric Enge: Or clicks on some other element. Could you track cursor movements?

    Bill Slawski: There have been a couple patents, even some from Google, on tracking cursor movements that they may possibly use someday. These could give them an indication of how relevant something may, or may not, be to a particular query.

    One patent is described as being used on a search results page, and it shows where someone hovers for a certain amount of time. If it’s a search result, you see if they hover over a one-box result which may give them an incentive to continue showing particular types of one-box results. That’s a possibility, mouse pointer tracking.

    Bounce Rates and Other User Behavior Signals

    Eric Enge: Getting back to the second whitepaper, what about using the actual ad bounce rate directly as a signal, because that’s also potentially a validating signal either way?

    Bill Slawski: It’s not necessarily a bad idea.

    Eric Enge: Or low click through rates, right?

    Bill Slawski: As we said, user signals sometimes tend to be noisy. We don’t know why someone might stay on one page longer than others. We don’t know if they received a phone call, if they opened it up in a new tab, if they are showing someone else and have to wait for the person, or there are plenty of other reasons.

    You could possibly collect different user behavior signals even though they may be noisy and may not be an accurate reflection of someone’s interest. You could also take another approach and use the user behavior signals as feedback. To see how your methods are working, you have the option to have a wider range of different types of data to check against each other.

    Rather than having noisy user data be the main driver for your ranking… you look at the way content is presented on the page.

    Bill Slawski: That’s not a bad approach. Rather than have noisy user data be the main driver for your rankings, you find another method that looks at the way content is presented on a page. One area is segmentation of a page, which identifies different sections of a page by looking at features that appear within those sections or blocks, and which area is the main content part of a page. It’s the part that uses full sentences, or sometimes sentence fragments, uses periods and commas, capital letters at the beginning of lines of text. You use a Visual Gap Segmentation (white space) type process to identify what might be an ad, what might be navigation, and where things might be, such as main content areas or a footer section. You look for features in sections.

    For instance, a footer section is going to contain a copyright notice and being able to segment a page like that will help you look for other signals of quality. For example, if an advertisement appears immediately after the first paragraph of the main content area you may say, “well, that’s sort of intrusive.” If one or two ads take up much of the main space, that aspect of the page may lead to a lower quality score.
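
    To illustrate the kind of block-level cues Slawski is describing, here is a simplified, hypothetical page skeleton annotated with the features a segmentation process might key on (the markup, class names and content are invented for illustration):

    <body>
      <!-- Short, capitalized, linked items: likely a navigation block -->
      <ul class="nav"><li><a href="/">Home</a></li><li><a href="/news">News</a></li></ul>

      <!-- Full sentences with periods and commas: likely the main content block -->
      <div class="content">
        <h1>Ten Restaurants in Greenwich Village</h1>
        <p>Each paragraph here reads as complete sentences, with normal punctuation.</p>
        <!-- An ad placed immediately after the first paragraph could read as intrusive -->
        <div class="ad">...</div>
      </div>

      <!-- A copyright notice is a strong cue that this block is the footer -->
      <div class="footer">&copy; 2011 Example Publisher</div>
    </body>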

    How the Search Engines Look at a Page

    Eric Enge: I understand how features may impact the search engine’s perception of a page’s quality, but that presumes they can unravel the CSS to figure out where things are really appearing.

    Bill Slawski: Microsoft has been writing white papers and patents on the topic of Visual Gap Segmentation since 2003. Google had a patent called “Determining semantically distinct regions of a document” involving local search, where they could identify blocks of text reviews for restaurants or other places that may be separated.

    For example, you have a Village Voice article about restaurants in Greenwich Village in New York, and it has ten paragraphs about ten different restaurants; each paragraph starts with the name of the restaurant and ends with the address, and in between is the review.

    This patent said, “we can take that page, segment those reviews, and identify them with each of the individual restaurants,” and then two or three paragraphs later they say, “we can also use the segmentation process in other ways, like identifying different sections of a page, main content, a header, a footer, or so on.” Google was granted a patent on a more detailed page segmentation process about a month ago.

    Bill Slawski: Segmentation is probably part of this quality review, being able to identify and understand different parts of pages. They don’t just look at CSS. In the days where tables were used a lot you had the old table trick.

    You moved the content up and, depending on how you arranged a table, you could use absolute positioning. With CSS you can do the same type of thing, but the search engine is going to use some type of simulated browser. It doesn’t render a page completely, but it helps them get an idea if they look at the DOM (Document Object Model) of a page.

    They look at some simulation of how the page will render, like an idea of where white space is, where HR tags might be throwing lines on the page, and so on. They can get a sense of what appears where, how they are separated, and then try to understand what each of those blocks does based upon linguistic-based features involving those blocks.

    Is it a set of multiple single-word items that have links attached to them? For instance, if each one is capitalized, that might be main navigation. So, you can break up a page like that, and you can look at where things appear. That could be a signal, a quality signal. You can see how they are arranged.

    The Search Engines Understand That There Are Different Types of Sites

    Eric Enge: Does the type of site matter?

    Bill Slawski: Most likely there is some categorization of types of sites so you are not looking at the same type of quality signals on the front page of a newspaper as you are on the front page of a blog or an ecommerce site.

    You can have different types of things printed on those different places. You are not going to get a TRUSTe badge on a blog, but you might on an ecommerce site. You look at the different features and realize that different genres, different types of sites, may have different ones associated with them.

    Eric Enge: Yes.

    Bill Slawski: That may have been derived when these seed quality sites were selected. There may have been some preprocessing to identify different aspects such as ecommerce site labels, blog labels, and other things, so whatever machine learning system they used could make distinctions between types of pages and see different types of features with them.

    It’s called a Decision Tree Process, and this process would look at a page and say, “is this a blog, yes or no? Is this a news site, yes or no?” It works down different pathways, asking questions, to arrive at that final score.

    Eric Enge: Other things you can look at are markers of quality, such as spelling errors on the page. I think Zappos, if I remember correctly, is currently editing all their reviews because they’ve learned that spelling errors and grammar affect conversion. So, that’s a clear signal they could potentially use, and the number of broken links is another.

    Another area that’s interesting is when you come to a page and it is a long block of text. There may be a picture on top, but that’s probably a good predictor of a high bounce rate. If it is a research paper, that’s one thing, but if it is a news article that is something else.

    Bill Slawski: Or, if it’s the Declaration of Independence.

    Eric Enge: Right, but they can handle that segmentation. If someone is looking for a new pair of shoes, and they come to a page with ten paragraphs of text and a couple of buttons to buy shoes, that’s a good predictor of a high bounce rate.

    Bill Slawski: On the other hand, if you have a page where there is an H1 header and a main heading at the top of the page, a couple of subheadings, a list, and some pictures that all appear to be meaningful to the content of the page, that would be a well-constructed article. It’s readable for the web, it’s easy to scan and it’s easy to locate different sections of the page that identify different concepts. This may make the page more interesting, more engaging, and keep people on a page longer.

    So, do these features translate to the type of user behavior where someone will be more engaged with the page and spend more time on it? Chances are, in many cases, they will.

    User Engagement Signals as a Validator

    Eric Enge: Another concept is that user engagement signals standing by themselves may be noisy, but ten of them collectively probably won’t be. You could take ten noisy signals and if eight of them point in the same direction, then you’ve got a signal.

    Bill Slawski: They reinforce each other in a positive manner.

    Eric Enge: Then you are beginning to get something which is no longer a noisy signal.

    Bill Slawski: Right. For example, if you have a warehouse full of people, in an isolated area, printing out multiple copies of the same document over and over and over, because they think printing a document is a user behavior signal that the search engine might notice, you are wasting a lot of paper and a lot of time.

    In isolation that is going to look odd, it’s going to be an unusual pattern. The search engine is going to say, “someone is trying to do something they shouldn’t be doing.”

    Eric Enge: Yes. That can become a direct negative flag, and you must be careful because your competitor could do it to you. So, the ballgame seems to go on. What about misleading information which was covered by a Microsoft white paper?

    Bill Slawski: That was about concepts involving web credibility that Microsoft attempted to identify. It involved both on-site factors and off-site factors, and a third category, called aggregated information, which was the user behavior data they collected about pages. If you had on-site factors such as security certificates, logos, and certain other features, that would tend to make you look more credible. The emphasis is more on credibility than quality. It seems that the search engines are equating credibility with quality to a degree.

    Bill Slawski: The AIRWeb Conference, which was held five years in a row but not held last year, was held again this year. It covered adversarial information retrieval on the web in conjunction with another workshop on credibility. They called it the 2010 Web Quality Conference and it was shared by people from Google, Microsoft, Yahoo and a number of academic participants.

    Design actually plays a very important part, maybe bigger than most people would assume, when it comes to people assessing whether or not a site is credible.

    You can go back a number of years to the Stanford Persuasive Technology Lab’s research and work on credibility. One of the findings, from a study of five thousand web sites or so, was that design plays an important part, maybe bigger than most people would assume, when it comes to people assessing whether or not a site is credible.

    They also came out with a series of guidelines that said certain things will make your web site appear more credible to people. It included photographs of the people behind the site, explicitly showing an address, and having a privacy policy, an ‘about us’ page, or terms of service. These are on-page signals you could look at.

    There are many off-page signals you could look at, such as winning a Webby Award, being recognized in other places, being cited on authoritative type sites, or even PageRank, which they said they would consider as a signal to determine whether or not a page was a quality page. In the Microsoft paper they said they would look at PageRank, which was interesting.

    Populating Useful Information Among Related Web Pages

    Eric Enge: Then you have the notion of brand searches. If people are searching for your brand, that’s a clear signal. If you have a no-name web site and there are no searches for the web site name or the owner’s company name, that’s a signal too.

    Bill Slawski: That stirs up a whole different kettle of fish, and it leads to how you determine whether or not a page is an authority page. For instance, Google decides that when somebody types ESPN into the search box on the toolbar, the ESPN web site should be the first one to come up; it doesn’t matter much what follows it. The same goes if they type Hilton. But this gets into the topic of data the search engines identify as named entities, or specific people and places: how do they then associate those with particular query terms, and if those query terms are searched for, how do they treat them?

    Do they look at it as a navigational query and ensure the site they associated with it comes up? Do they imply site search and show four, five, six, seven different results from that web site in the top ten which Google had been doing for a good amount of time?

    Eric Enge: Even for a non-brand search, for instance, Google surely associates Zappos with shoes. Right? So, in the presence of the authority, compared to some other new shoe site, you could reference the fact that the brand name Zappos is searched a bunch and that could be a direct authority signal for any search on the topic of shoes.

    Bill Slawski: Right. Let us discuss a different patent from Google that explores that and goes into it in more detail. There was one published in 2007 that I wrote about called “Populating useful information among related web pages.” It talks about how Google determines which web site might be associated with a particular query and might be identified as authoritative of it.

    In some ways, it echoes some of the things in the Microsoft paper about misinformation about authority. It not only looks at things it may see on the web, such as links to the pages using anchor text with those terms, but it may also look to see whether or not the term is a registered trademark that belongs to the company that owns a particular web site. It may also look at the domain name or yellow page entries.

    One of the authors of this patent also wrote a number of the local search patents which, in some parts, say that citations are just as good as links. A business that is mentioned at a particular location is more likely to rank higher if somebody does a search for businesses of that type in that location. So, this patent from Google expands beyond local search to find authoritative web pages for particular queries.

    Rejecting Annoying Documents

    Eric Enge: Excellent. Since we are getting towards the end I’d like your thoughts on annoying advertisements.

    Bill Slawski: Google came up with a patent a few years ago which, in some ways, seems a bit similar to Panda. It focused upon features on landing pages and the aspects of advertisements. It was called “Detecting and rejecting annoying documents”.

    It provided a list of the types of things they may look at in ads and on landing pages: the subject matter, characteristics rating, what type of language it uses, where it is from geographically, and who the owner of the content is.

    Eric Enge: It may even detect content in images using OCR or other kinds of analysis to understand what is in an image.

    Bill Slawski: Right, and also locate Flash associated with an ad, locate the audio that might be played, look at the quality of images, and whether or not they are animated. It was a big list. I do not know if we will see a patent anytime soon from Google that gives us the same type of list involving organic search and the Panda approach. Something might be published two, three or four years from now.

    Eric Enge: It’s interesting. Obviously, what patents they are using and not using is something you don’t get visibility into unless you are in the right building at the right time at the Googleplex.

    It seems to me the underlying lesson is that you need to be aware of search engines and, obviously, make search engine savvy web sites. The point is you need to focus on what people should have focused on all along which is: What do my users want? How do I give it to them? How do I engage them? How do I keep them interested? Then create a great user experience because that’s what they are trying to model.

    My perspective is search engines are another visitor to your web site like anybody else.

    Bill Slawski: Right. My perspective is that search engines are another visitor to your web site like anybody else. They may have different requirements. There may be some additional technical steps you have to take for your site to cater to them, but they are a visitor and they want what other visitors to your site want. They want to fulfill some type of informational or situational need. They want to find information they are looking for. They want to buy what you offer if, in the snippets that show up in search results, that’s what you do offer.

    If you are a web site that’s copying everybody else and not adding anything new or meaningful, not presenting it in a way that makes it easier to read and easier to find, and there is nothing that differentiates you or sets you apart, then you are not treating potential visitors the best way you can.

    When you do SEO, even in the age of Panda, you should be doing all the basics. It’s a re-ranking approach. You need to get rid of the same content with multiple different URLs, get rid of pages that are primarily keyword insertion pages where a phrase or two or three changes but the rest of everything stays the same.

    When you write about something, if you are paying attention to phrase-based indexing, make sure you include related information that most people would include on that page, related terms and so on. Those basics don’t go away and they may be more important now than they were in the past.

    Yes. As a searcher, as someone who helps people with web sites, and as someone who may present my own stuff on web sites, I want to know how it works. When I do a search, I want to make sure I am finding the things that are out on the web.

    Get some sweat equity going and make sure your stuff is stuff people want to see, learn about the search space as much as you can.

    Bill Slawski: If Google can do anything to make it better at surfacing the things I need, or want, or hope to see, I think everybody wins. That may be more work for people putting content on the web, but the cost of sweat is fairly cheap. Get some sweat equity going and make sure your stuff is stuff people want to see, learn about the search space as much as you can.

    As ranking signals we have relevance, we have importance and, increasingly, we have content quality.

    Eric Enge: How is life for you otherwise?

    Bill Slawski: I have been trying to keep things local, get more involved in my local community, and do things with the local Chamber of Commerce. I now live in an area that’s much more rural in Northwestern Virginia and some of these local business people need the help.

    I am really close to DC and have been trying to work more with nonprofits. Instead of traveling, I am meeting many people locally, helping people learn more about what they can do with their web sites and that’s pretty fulfilling.

    Bill Slawski: I live in horse country now; there might actually be more horses in my county than there are people.

    Eric Enge: Thanks Bill!

    Originally published at Ramblings About SEO

  • SEO Reports in Google Analytics

    Google announced the launch of a limited pilot for SEO reports in Google Analytics, which are based on search queries data from Webmaster Tools.

    “Webmasters have long been asking for better integration between Google Webmaster Tools and Google Analytics,” Google says on the Webmaster Central Blog.

    The SEO reports also take advantage of Google Analytics’ filtering and visualization capabilities for deeper analysis, Google says. “For example, you can filter for queries that had more than 100 clicks and see a chart for how much each of those queries contributed to your overall clicks from top queries.”

    Google SEO Reports from Webmaster Tools data in Google Analytics

    Search queries data includes:

    • Queries: The total number of search queries that returned pages from your site over the given period. (These numbers can be rounded, and may not be exact.)
    • Query: A list of the top search queries that returned pages from your site.
    • Impressions: The number of times pages from your site were viewed in search results, and the percentage increase/decrease in the daily average impressions compared to the previous period. (The number of days per period defaults to 30, but you can change it at any time.)
    • Clicks: The number of times your site’s listing was clicked in search results for a particular query, and the percentage increase/decrease in the average daily clicks compared to the previous period.
    • CTR (clickthrough rate): The percentage of impressions that resulted in a click to your site, and the increase/decrease in the daily average CTR compared to the previous period.
    • Avg. position: The average position of your site on the search results page for that query, and the change compared to the previous period. Green indicates that your site’s average position is improving. To calculate average position, we take into account the ranking of your site for a particular query (for example, if a query returns your site as the #1 and #2 result, then the average position would be 1.5).

    Webmasters can use the search queries data to review the query list for expected keywords and compare impressions and clickthrough rates. It can also be helpful for keyword ideas for paid search campaigns.

    “We hope this will be the first of many ways to surface Webmaster Tools data in Google Analytics to give you a more thorough picture of your site’s performance,” said Trevor Claiborne of the Google Analytics Team. “We’re looking forward to working with members of the pilot to help us identify the best ways to make this happen.”

    If you’re both a Webmaster Tools verified site owner and a Google Analytics admin, you can sign up for the pilot here. Each individual user must sign up for the pilot if they want access to the new reports.

  • Who You Are Becoming More Important in Google

    Google announced today that it is now supporting authorship markup, which it will use in search results. The company says it is experimenting with using this data to help people find content from authors in search results, and will continue to look at ways it could help the search engine highlight authors and rank search results.

    This seems to indicate that Google will be placing even more emphasis on authority and/or personal connections with content. We have to wonder how this will affect content farms down the line.

    In the Webmaster Central Help Center, Google says, “When Google has information about who wrote a piece of content on the web, we may look at it as a signal to help us determine the relevance of that page to a user’s query. This is just one of many signals Google may use to determine a page’s relevance and ranking, though, and we’re constantly tweaking and improving our algorithm to improve overall search quality.”

    “We now support markup that enables websites to publicly link within their site from content to author pages,” explains software engineer Othar Hansson on Google’s Webmaster Central Blog. “For example, if an author at The New York Times has written dozens of articles, using this markup, the webmaster can connect these articles with a New York Times author page. An author page describes and identifies the author, and can include things like the author’s bio, photo, articles and other links.”

    “The markup uses existing standards such as HTML5 (rel=”author”) and XFN (rel=”me”) to enable search engines and other web services to identify works by the same author across the web,” continues Hansson. “If you’re already doing structured data markup using microdata from schema.org, we’ll interpret that authorship information as well.”

    Schema.org was revealed last week – an initiative on which Google, Bing, and Yahoo teamed up to support a common set of schemas for structured data markup on web pages. Schema.org provides tips and tools for helping sites appear in search results.
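
    For sites already marked up with schema.org microdata, the authorship information Hansson refers to would presumably look something like the sketch below (a hypothetical article; the author name and URL mirror Google’s own rel=”author” example further down):

    <div itemscope itemtype="http://schema.org/Article">
      <h1 itemprop="name">Webmaster Tips</h1>
      <!-- The author property identifies a Person associated with this article -->
      <span itemprop="author" itemscope itemtype="http://schema.org/Person">
        Written by <a itemprop="url" href="http://example.com/authors/mattcutts"><span itemprop="name">Matt Cutts</span></a>
      </span>
      <div itemprop="articleBody">...</div>
    </div>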

    How to Implement it

    To implement the authorship markup, Google says:

    To identify the author of an article or page, include a link to an author page on your domain and add rel=”author” to that link, like this:

    Written by <a rel="author" href="../authors/mattcutts">Matt Cutts</a>.

    This tells search engines: “The linked person is an author of this linking page.” The rel=”author” link must point to an author page on the same site as the content page. For example, the page http://example.com/content/webmaster_tips could have a link to the author page at http://example.com/authors/mattcutts. Google uses a variety of algorithms to determine whether two URLs are part of the same site. For example, http://example.com/content, http://www.example.com/content, and http://news.example.com can all be considered as part of the same site, even though the hostnames are not identical.

    You can also link multiple profiles, as author pages can link to other web pages about the same author. You can tell Google that all of these profiles represent the same person by using rel="me" links between the profile pages. More on this in the help center.
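
    As a rough sketch of how the rel="me" side fits together (the page paths and profile URLs below are hypothetical placeholders), an author page such as example.com/authors/mattcutts might link out to the author’s other profiles like this:

    <!-- On the author page (hypothetical http://example.com/authors/mattcutts) -->
    <a rel="me" href="https://profiles.google.com/exampleauthor">My Google Profile</a>
    <a rel="me" href="http://twitter.com/exampleauthor">My Twitter page</a>
    <!-- Each linked profile should carry a reciprocal rel="me" link back to this author page -->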

    Google’s rich snippets testing tool will also let you check your markup and make sure Google can extract the proper data.

    Google has already been working with a few publishers on the authorship markup, including The New York Times, The Washington Post, CNET, Entertainment Weekly, and The New Yorker. They’ve also added it themselves to everything hosted by YouTube and Blogger, so both of these platforms will automatically include the markup when you publish content.

  • Good SEO Starts with Smart Purchasing Decisions

    I don’t know about you, but sometimes I get completely overwhelmed with the sheer amount of time, energy and raw hours that go into properly marketing a website online. The thing that gets me the most is that with SEO and other forms of online marketing, there really is no situation when you can sit back and say “we’ve arrived.” Once you optimize a site, there are still so many things that can be assessed, analyzed, uncovered and corrected that you never really can say, “It’s Miller time!”

    This is what I envy about web designers. They get to produce a finished piece, then go and collect awards for it. But online marketing – that’s a different ballgame altogether. Sure, we can celebrate top rankings, but tomorrow there is another keyword that needs improvement!

    Making a Smart Purchasing Decision

    Ninety percent of the online marketing services my company provides are based on the amount of time we guesstimate the job will take to get results. There are a few expenditures the clients may have to buy into (directory submission fees, requested analytics tools, etc.), but most of the cost associated with SEO services comes down to determining how many hours are needed on a month-to-month basis.

    We look at time needed for researching, writing, analyzing, tweaking, optimizing, communicating, reporting and linking, just to name a few. Sometimes I think it’s difficult for clients to fully appreciate the time invested in doing a job properly, especially when they see “less expensive” options floating around. Sure, you can hire some kid down the street to mow your lawn, or you can hire a gardener to take care of your lawn, garden and flowerbeds, get rid of unwanted rodents, weeds and other pests, and make sure everything is properly fertilized and pruned each week. The time difference between the two is substantial.

    The problem comes, in SEO at least, when many people expect to hire the gardener at lawn-mower-kid wages. There is just no way the gardener can do their job effectively in the time it takes the kid to mow the neighbor’s lawn across the street. Can’t happen.

    How Much Time Does a (Good) Job Take?

    When it comes to purchasing an SEO or SEM strategy for your online business, there are two things to consider: How many hours does it take to meet your expectations, and how much are you willing to pay for each hour that goes into meeting those expectations?

    Many SEOs charge a pre-determined package price. That just means they have pre-determined how many hours they will be providing you for their service. If you purchase an SEO package for $3000 per month, you can get anywhere from 30 hours ($100/hour) to 10 hours ($300/hour). The question you have to ask yourself is – can the $100/hour guy get the same results as the $300/hour team?

    If you can confidently say yes, then maybe that’s your guy. If not, maybe you need to consider the more “expensive” option. But we all know, cheap and ineffective usually turns out to cost a lot more than the expensive option that gets results!

    Ten hours per month on SEO or SEM doesn’t seem like much, but in the right hands, a lot can be accomplished. Here is a simple breakdown of what I would consider the average, high-quality SEO campaign:

    • Site Architecture and Site-Wide SEO: five to 10 hours needed at the onset to analyze the initial site architectural problems and create a concatenation schema to make all pages “search engine friendly.”
    • Keyword Research: initially, up to five hours to research the site’s core terms, determine which pages/keywords are a top priority for optimization and create an optimization plan moving forward. An additional 30-60 minutes of keyword research can go into each specific page being optimized.
    • On-Page Optimization: one to two hours per page to optimize keywords into the text, streamline the code (if necessary) and implement onto the site.
    • SEO Maintenance: two to four hours each month to review past optimization efforts and implement tweaks and changes designed to improve site performance. This also includes reviewing site usability and conversion issues.
    • Link Building/Social Media: five to six hours each month, at a minimum. New or competitive sites can, and often do, need much stronger link building or social media campaigns.
    • Analytics and Testing: three to five hours per month. No SEO campaign is complete without some way to analyze the overall performance of the optimization, usability and conversion improvement efforts that are being invested. The better the analysis, the more hours that must be invested.

    These numbers can fluctuate depending on the size of the site, but this is what we would consider a pretty basic campaign. If you’re looking for the best pricing option, how much from this do you feel you can cut before you’re cutting into your success?

    That’s the key question. If you’re looking solely at pricing and not factoring in the actual work, you’re bound to make a bad purchasing decision. The real question is, will the price you’re paying (or willing to pay) give you the ROI you need to make a profit? It’s probably not a good idea to purchase SEO until you can answer that question affirmatively.

    Originally published at E-Marketing Performance

  • Google +1 Button: 31 Things You Should Know

    As you may know, the Google +1 button has become available for webmasters, site owners and publishers to include on their content, and many rushed right in to do just that. Why not? It impacts your site’s visibility in search, and with the continuously changing Google algorithm, anything to help in that department is welcome to most sites.

    We’ve compiled a list of noteworthy tidbits about the button, and things that we think any site interested in using it should know.

    1. The +1 button will influence search rankings. Here is the exact quote from Google’s David Byttow, from when the feature was first announced: “We’ll also start to look at +1’s as one of the many signals we use to determine a page’s relevance and ranking, including social signals from other services. For +1’s, as with any new ranking signal, we’ll be starting carefully and learning how those signals affect search quality over time.”

    2. When a user searches while signed in, their search result snippets may be annotated with the names of their connections who have +1’d the page. When none of the user’s connections have +1’d a page, the snippet may display the aggregate number of +1’s the page has received.

    3. Google says publishers could see “more, and better qualified traffic coming from Google” as potential visitors see recommendations from friends and contacts beneath their search results.

    4. Google calls the +1 button “shorthand for ‘this is pretty cool’ or ‘you should check this out’.”

    5. Once a user clicks the button, a link to the content appears under the +1’s tab on the user’s Google Profile.

    6. Google suggests clicking the button when you “like, agree with, or want to recommend” something to others.

    7. The +1 Button is not the same as Google Buzz, though there are similarities. They both appear on your Google Profile under different tabs, but +1’s don’t allow for comments (at least not yet; I would not be surprised to see Buzz’s functionality get rolled into +1 eventually).

    8. +1’s are public by default. Google may show them to any signed-in user who has a social connection to one. Users can choose not to have them displayed publicly on their Google Profile, however.

    9. There are different sizes and styles of the button that you can use on your site.

    10. The button is even more customizable if you want to get more technical. The API documentation can be found here: http://code.google.com/apis/+1button/

    11. When a user clicks on the +1 button it applies to the URL of the page they’re on.

    12. Still, multiple buttons can be placed on a single page, each +1’ing a different URL (refer to the documentation above, and see the sketch after this list).

    13. Google suggests placing the button wherever you think it will be most effective around your content, but the company specifically recommends above the fold, near the title of the page, and close to sharing links. Google also says it can be effective at the end of an article as well as the beginning.

    14. By placing the <script> tag at the bottom of the document, just before the closing body tag, Google says you may improve the page’s loading speed (which is another factor Google takes into account in terms of ranking).

    15. If you try to +1 a private URL, it won’t work, according to Google.

    16. You have to be logged into a Google account for the button to work.

    17. While everyone can see aggregate annotations, signed in users can also see personalized annotations from people in their Gmail/Google Talk Chat list, My Contacts group in Google Contacts, and people they’re following in Google Reader and Google Buzz.

    18. Google points to these canonicalization strategies to ensure the +1s “apply as often as possible to the pages appearing in Google search results.” http://www.google.com/support/webmasters/bin/answer.py?answer=139066

    19. The button is supported in 44 languages (though the annotations only appear in the English language Google.com search results for the time being).

    20. The button will be seen in the Android Market, Blogger blogs, Product Search, and YouTube, in addition to any other sites that add them.

    21. A lot of sites have already replaced the Google Buzz button on content pages with the +1 button.

    22. If you have a Blogger blog, you can add the button by going to Design > Page Elements on the dashboard, finding the “Blog posts” area, clicking edit, and selecting the “Show Share Buttons” option, where you should find the +1 button.

    23. The +1 Button will be available on YouTube watch pages under the “share” feature. Consider how valuable YouTube can already be to SEO, and then take into consideration the search implications of the +1 button.

    24. If you’re signed into your Google account, Google will show you +1 annotations from your Google contacts on YouTube search results.

    25. Google says adding +1 buttons to your pages can help your ads stand out on Google. “By giving your visitors more chances to +1 your pages, your search ads and organic results might appear with +1 annotations more often. This could lead to more–and better qualified–traffic to your site,” the company says.

    26. The +1 button will appear next to the headline on search ads. Personalized annotations will appear beneath the Display URL.

    27. Publishers can get updates about the button by joining this group.

    28. Google may crawl or re-crawl pages with the button, and store the page title and other content, in response to a +1 button impression or click.

    29. Google has strict policies for publishers that it says it will use (along with the Google ToS) to govern use of the +1 button. Here are these policies in their entirety:

    Publishers may not sell or transmit to others any data about a user related to the user’s use of the +1 Button. For the avoidance of doubt, this prohibition includes, but is not limited to, any use of pixels, cookies, or other methods to recognize users’ clicks on the +1 Button, the data of which is then disclosed, sold, or otherwise shared with other parties.

    Publishers may not attempt to discover the identity of a +1 Button user unless the user consents to share his or her identity with the Publisher via a Google-approved authorization procedure. This prohibition includes identifying users by correlating +1 Button reporting data from Google with Publisher data.
    Publishers may not alter or obfuscate the +1 Button, and Publishers may not associate the +1 Button with advertising content, such as putting the +1 Button on or adjacent to an ad, unless authorized to do so by Google.

    Publishers may not direct users to click the +1 Button for purposes of misleading users. Publishers should not promote prizes, monies, or monetary equivalents in exchange for +1 Button clicks. For the avoidance of doubt, Publishers can direct users to the +1 Button to enable content and functionality for users and their social connections. When Publishers direct users to the +1 Button, the +1 action must be related to the Publishers’ content and the content or functionality must be available for both the visitor and their social connections.

    Google may analyze Publishers’ use of the +1 Button, including to ensure Publishers’ compliance with these policies and to facilitate Google’s development of the +1 Button. By using the +1 Button, Publishers give Google permission to utilize an automated software program (often called a “web crawler”) to retrieve and analyze websites associated with the +1 Button.

    30. The button is not available on mobile search results yet, though users may still be able to see the buttons on your pages.

    31. According to Search Engine Land, Google will launch analytics for the button, showing webmasters info on geography, demographics, content, and search impact, though it may still be a while away. Apparently Google is working with launch partners to make sure reporting is accurate before offering it on a wider scale.
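
    To tie items 12 through 14 together, here is a rough sketch of a page with two buttons that +1 different (hypothetical) URLs and the script loaded just before the closing body tag. The tag and attribute names below reflect the plusone.js snippet as we understand it; treat the exact markup as illustrative and generate your own version with Google’s button tool.

    <!-- Two +1 buttons on one page, each crediting a different (hypothetical) URL -->
    <g:plusone size="medium" href="http://example.com/articles/first-post"></g:plusone>
    <g:plusone size="medium" href="http://example.com/articles/second-post"></g:plusone>

    <!-- Load the button script once, just before </body>, so it doesn't hold up page rendering -->
    <script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
    </body>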

    If you want the code for the button to add to your site, you can get it here.

    There are more discussions (including issues people are having with the +1 button) in Google’s Webmaster Central Help Forum.

  • Google Acquires PostRank, a Social Measurement Firm

    Google Acquires PostRank, a Social Measurement Firm

    Google has acquired PostRank, a service that helps users find, read, and measure online social engagement with news and articles. The amount has not been disclosed.

    The company offers data services (APIs that enable real-time content and engagement for media monitoring, analysis, and aggregation), analytics, consumer-oriented tools, and a service called Connect, which connects brands and agencies with influential online publishers.

    Big news! The PostRanch has become part of the GooglePlex! http://bit.ly/bolp

    An announcement on the PostRank blog says:

    From the seed of an idea in late 2006, to the launch of PostRank in mid 2007, and an incredible four years of continuous learning, iteration, and developing and launching new products—what an amazing experience it has been. And yet, the best is still yet to come. Today, we are excited to announce that PostRank has been acquired by Google!

    We know that making sense of social engagement data is important for online businesses, which is why we have worked hard to monitor where and when content generates meaningful interactions across the web. Indeed, conversations online are an important signal for advertisers, publishers, developers and consumers—but today’s tools only skim the surface of what we think is possible.

    We are extremely excited to join Google. We believe there is simply no better company on the web today that both understands the value of the engagement data we have been focusing on, and has the platform and reach to bring its benefits to the untold millions of daily, active Internet users. Stay tuned, we’ll be sure to share details on our progress in the coming months!

    Of course, we wouldn’t be where we are today if it wasn’t for all the help, feedback, and support we’ve received from our community over the past four years—thank you all, you know who you are, and we truly couldn’t have done it without you!

    PostRank says, “conversations online are an important signal for advertisers, publishers, developers and consumers.” Obviously, they’re an important signal for search as well, and while we don’t know exactly what Google’s plans are for PostRank, it’s clear that Google is putting a lot more stock into social search of late. Google did give TechCrunch the following statement:

    “We’re always looking for new ways to measure and analyze data, and as social analytics become increasingly important for online businesses, we’re excited to work with the PostRank team to make this data more actionable and accountable. They have developed an innovative approach to measuring web engagement, and we think they can help us improve our products for our users and advertisers.”

    The company also notes that its team will be moving to Google’s Mountain View offices.

  • Google “Not Betting the Farm” on Self-Driving Cars

    Google “Not Betting the Farm” on Self-Driving Cars

    Google co-founder and CEO Larry Page spoke to investors, aiming to set their minds at ease with regard to the company’s spending habits.

    The company’s stock has not been doing so great since Page took over as CEO, with investors worrying that Google is spending too much on things that might not pay off. According to Mercury News, Google’s stock has dropped by nearly $100 per share since the transition in leadership.

    Page highlighted the company’s success with Android, Chrome, and Display Advertising. He reportedly put up a picture of Google’s famous self-driving cars, saying shareholders shouldn’t read press reports about things like that and assume a large amount of the company’s resources are being poured into them.

    He’s quoted as saying, “It’s much more interesting [for the media and people outside of the company] — what is the latest crazy thing that Google did. It tends to be like three people in the company, keep that in mind. We are not betting the farm on a lot of those things. That’s not what we are doing.”

    Page stressed that the company is still focusing on search and advertising, Google’s real breadwinners. “We don’t want to choke innovation,” he added. “We want to make sure we have a lot of things going on at the company that are maybe speculative…we spend the vast majority of our resources on our core businesses, which are search and advertising. … That’s our core focus.”

    In October, Google announced:

    So we have developed technology for cars that can drive themselves. Our automated cars, manned by trained operators, just drove from our Mountain View campus to our Santa Monica office and on to Hollywood Boulevard. They’ve driven down Lombard Street, crossed the Golden Gate bridge, navigated the Pacific Coast Highway, and even made it all the way around Lake Tahoe. All in all, our self-driving cars have logged over 140,000 miles. We think this is a first in robotics research.

    Our automated cars use video cameras, radar sensors and a laser range finder to “see” other traffic, as well as detailed maps (which we collect using manually driven vehicles) to navigate the road ahead. This is all made possible by Google’s data centers, which can process the enormous amounts of information gathered by our cars when mapping their terrain.

    To develop this technology, we gathered some of the very best engineers from the DARPA Challenges, a series of autonomous vehicle races organized by the U.S. Government. Chris Urmson was the technical team leader of the CMU team that won the 2007 Urban Challenge. Mike Montemerlo was the software lead for the Stanford team that won the 2005 Grand Challenge. Also on the team is Anthony Levandowski, who built the world’s first autonomous motorcycle that participated in a DARPA Grand Challenge, and who also built a modified Prius that delivered pizza without a person inside. The work of these and other engineers on the team is on display in the National Museum of American History.

    This followed a lot of media criticism about Google’s continued ability to innovate, and numerous reports of top engineers choosing Facebook as an employer over Google.

    As recently as last month, Google was found to be “quietly lobbying” for proposed legislation in Nevada that would legalize self-driving cars on public roads. According to the New York Times, Google had hired a Las Vegas-based lobbyist to promote the legislation, which would allow the licensing and operation of the cars while also allowing texting behind the wheel of a self-driving car. 

    Self-driving cars aside, Google has recently unveiled some other ambitious endeavors, not the least of which is Google Wallet – the company’s vision for mobile payments. At Google I/O, its developer conference held last month, the company discussed Android @ Home and the Open Accessory project, which would see everyday appliances getting integrated with the company’s mobile operating system. Google also unveiled Google Music, and the new Chromebooks, based on its innovative operating system strategy – Chrome OS.

    This week, Google officially launched Google Offers, which is a little more in line with the more traditional money makers like search and advertising. Then of course there’s the +1 button, which is even more directly tied to search and advertising.

  • Google, Bing, and Yahoo Work Together on Search

    Google, Bing, and Yahoo Work Together on Search

    Bing, Google and Yahoo have teamed up to announce schema.org, an initiative to support a common set of schemas for structured data markup on web pages.

    A representative for Bing tells WebProNews, “Over the past two years, Bing has worked to improve the search experience to better reflect both the evolving Web and changing consumer habits.”

    “While this effort has a major ‘geek factor,’ it serves as quite a significant advancement for both the search industry and consumers,” he said.

    The site will provide tips and tools for helping sites appear in search results. “It will also help search engines better understand websites, and moving forward, Bing will work jointly with the larger web community and its search partners to extend the available schema categories,” the representative says. “Consumers will also benefit from this effort by experiencing richer search experiences and content from a much broader set of publishers.”

    “At Google, we’ve supported structured markup for a couple years now. We introduced rich snippets in 2009 to better represent search results describing people or containing reviews. We’ve since expanded to new kinds of rich snippets, including products, events, recipes, and more,” says Google’s search quality team. “Adoption by the webmaster community has grown rapidly, and today we’re able to show rich snippets in search results more than ten times as often as when we started two years ago.”

    “We want to continue making the open web richer and more useful. We know that it takes time and effort to add this markup to your pages, and adding markup is much harder if every search engine asks for data in a different way,” the team adds. “That’s why we’ve come together with other search engines to support a common set of schemas, just as we came together to support a common standard for Sitemaps in 2006. With schema.org, site owners can improve how their sites appear in search results not only on Google, but on Bing, Yahoo! and potentially other search engines as well in the future.”

    The search engines also worked together to support the canonical tag.
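
    For reference, the canonical tag is a single line in a page’s <head> that tells search engines which URL is the preferred version of a piece of content. A minimal sketch, with a hypothetical URL:

    <!-- On a duplicate or parameterized page, pointing at the preferred version -->
    <link rel="canonical" href="http://example.com/content/webmaster_tips" />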

    Here’s what the schema.org site itself says:

    This site provides a collection of schemas, i.e., html tags, that webmasters can use to markup their pages in ways recognized by major search providers. Search engines including Bing, Google and Yahoo! rely on this markup to improve the display of search results, making it easier for people to find the right web pages.

    Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results in order to make it easier for users to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure.

    A shared markup vocabulary makes it easier for webmasters to decide on a markup schema and get the maximum benefit for their efforts. So, in the spirit of sitemaps.org, Bing, Google and Yahoo! have come together to provide a shared collection of schemas that webmasters can use.

    Google says it has added over 100 new types and ported all existing types of rich snippets. Where in the past it supported three different standards for structured data markup, it will now focus on microdata, though it says it will continue to support existing rich snippet markup formats. Google also provides a testing tool for markup here.
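
    As a rough illustration of the kind of microdata markup schema.org describes, here is a minimal sketch of an author bio marked up as a schema.org Person; the name and URL are hypothetical examples, and the current vocabulary should be checked on schema.org itself:

    <!-- A hypothetical author bio marked up with schema.org microdata -->
    <div itemscope itemtype="http://schema.org/Person">
      <span itemprop="name">Jane Example</span>,
      <span itemprop="jobTitle">Staff Writer</span>
      <a itemprop="url" href="http://example.com/authors/jane">author page</a>
    </div>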

    Bing also says that while it accepts a wide variety of markup formats, it is working to simplify the choices for webmasters.

  • Google Simplifies Site Search Ownership Transfers

    Google Simplifies Site Search Ownership Transfers

    Google has made it easier for Google Site Search users to transfer ownership to different users, after numerous requests to do so. Software engineer Yong Zhu explains on the Google Custom Search blog:

    For instance, you may have created, developed and customized your website’s search experience under your Google account, but want the long-term management of the search engine to be performed by someone else. In the past, you would have been required to manually export the configuration, import it into a new Site Search instance and then cancel the old instance.

    Now, you can very easily transfer the ownership of a Site Search engine to a new user by simply specifying a different Google account email address in the Business Settings tab in the control panel.

    Transfer ownership of site search

    After a user provides their account info for the new admin, a new site search engine is created using the same configuration as the current one, and the new engine is officially owned and administered by the new owner.

    Google says any unused query quota is transferred to the new site search engine, which will show the transfer history in the business settings in the control panel. Users can still continue to use the old engine, but ads may be displayed alongside search results, and XML access is disabled.

    Google also makes a couple of recommendations for new owners after the transfer takes place: review and update admin details in the business settings (license renewal notifications are sent to the email address provided), and edit the search box code on your content pages where Google Site Search is integrated, in order to replace the old search engine ID with the new one.

  • Is Google’s +1 Button Killing Google Buzz Already?

    Google Buzz has never lived up to Google’s hopes for the service. That seems pretty clear now. Much like the ill-fated Google Wave, it launched to much hype (not to mention privacy concerns), but for the most part fizzled out in terms of user interest.

    Even Buzz users (I count myself among them, to some extent) aren’t exactly radiating with enthusiasm for the service, and quite frankly, Google has done little in the way of enhancing the service since it launched.

    Now here comes the Google +1 Button, Google’s latest social sharing button, which has a direct impact on search rankings. Clearly webmasters and publishers have greater incentive to use this than they do Google Buzz, especially considering the ongoing challenges of SEO and the ever-changing Google algorithm.

    Today, Google launched the +1 button for websites, and sites all over the place are already ditching the Buzz button on their content in favor of the +1 button. Prior to the release, we wondered if site owners would find room for both, and it appears that many are not.

    Plenty of industry publications have already made their decisions. TechCrunch, Mashable, Search Engine Land, ReadWriteWeb, and we here at WebProNews have already switched out Buzz for +1 on article pages. I’m sure the list of sites goes on and on.

    Google did address this somewhat when the +1 button was announced. “Buzz button[s] are used for starting conversations about interesting web content (‘Hey guys, what do you think about this news story?’),” the company said. “+1 buttons recommend web content to people in the context of search results (‘Peng +1′d this page’), and +1′s from social connections can help improve the relevance of the results you see in Google Search. Soon, you’ll be able to use the +1 button, or the Buzz button, or both—pick what’s right for your content.”

    Apparently site owners are not too concerned about Buzz conversations around their content. I’d wager that Google will address this issue again in the near future.

    Google Buzz still sits in the Gmail inbox. How often it actually gets checked, and by how many people, is anybody’s guess.

    It’s entirely possible that Google just effectively put a big nail in the coffin of its last major social attempt. Perhaps not the final nail, but I don’t see any indication that Buzz is gaining any ground. I suppose one positive aspect of this for Buzz is that the +1 shares appear on the Google Profile in a tab next to a Google Buzz tab. Assuming that people actually visit these profiles to look at +1 shares (and that’s a big assumption), they could be reminded of Buzz and see what these same people are “buzzing” about. Chances are that in most cases, they’re simply syndicating updates from other networks like Twitter.

    A privacy group did just win $500,000 from a Google Buzz settlement.

  • Bing Giving Away $100 Bucks to 5 Twitter Followers

    Bing Giving Away $100 Bucks to 5 Twitter Followers

    Five people who follow Bing on Twitter have the chance to win a hundred bucks apiece, courtesy of a sweepstakes the search engine is running today only. It’s called the “Friends Don’t Let Friends Decide Alone” Sweepstakes, and appears to be geared at drawing attention to Bing’s recently launched social features.

    Given that these features were largely based upon Facebook integration, it seems a bit strange that the contest is Twitter-based, but whatever.

    Bing is asking via Twitter, “What decision can you not make without your friends?” Today, until midnight tonight, followers can simply reply to the question with the hashtag #decisions, and Bing will randomly select 5 winners to win “$100 towards a night out with friends”.

    We’re giving out $100 to 5 lucky followers. Stay tuned! Rules: http://binged.it/mrJZGH #decisions ^bb

    What decision can you not make without your friends? Answer for a chance to win $100! Rules: http://binged.it/ih283u #decisions ^bb

    If you haven’t tried out our new social features yet, learn how your friends are fueling faster decisions: http://binged.it/jcELk2

    The money will come in the form of a Visa gift card. To be eligible, you have to be a legal resident of the 50 U.S. states or the District of Columbia, and be 18 or older. Microsoft employees, relatives of employees, and employees of subsidiaries are all ineligible.

    I’m guessing Microsoft is using the contest to tap into the social community to spark some ideas for how to improve upon its social search integrations. It’s become quite clear that search in general is trending much more in this direction, and with Google upping the ante with its +1 button, Bing is uniquely positioned to get more out of Facebook – the social network that most already use.

    The official rules of the contest are here.

  • Adding the Google +1 Button to Your Site

    Adding the Google +1 Button to Your Site

    This week, it was leaked that the Google +1 Button for websites would be made available today. While Google still seems to be sitting on the official announcement for the moment, we did run across a post about it on the company’s Inside AdWords blog in our feeds, which appears to have been removed (presumably it was accidentally posted too early).

    Here’s what Google has to say on using the button:

    Adding +1 buttons to your pages is a great way to help your ads stand out on Google. By giving your visitors more chances to +1 your pages, your search ads and organic results might appear with +1 annotations more often. This could lead to more–and better qualified–traffic to your site.

    To get started, visit the +1 button tool on Google Webmaster Central, where you’ll configure small snippets of JavaScript to add to the pages where you want +1 buttons to appear. You can pick from a few different button sizes and styles, so choose the +1 button that best matches your site’s layout.

    When a user clicks +1, that +1 applies to the URL of the page they’re on. There are some easy ways to ensure your +1s apply as often as possible to the pages appearing in Google search results.
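
    For context, the code generated by the tool boiled down, as we recall, to a couple of lines: a script include and a placeholder tag where the button should render. The snippet below is a rough sketch rather than the tool’s exact output, so generate your own version from the button tool on Webmaster Central:

    <!-- Script include (it can also go just before the closing body tag to help page speed) -->
    <script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
    <!-- Default-size button that +1's the URL of the page it appears on -->
    <g:plusone></g:plusone>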

    There is a Google Publisher Button Announce Group, which webmasters can subscribe to in order to stay up to date on the latest with the button.

    The button is supported in 44 languages, but Google notes that +1 annotations only appear in English search results on Google.com for now.

    Google named Best Buy, AddThis, Mashable, The Huffington Post, Reuters, Bloomberg, and a few others as launch partners for the button. It will also appear on content in the Android Market, YouTube, Blogger, and Google Product Search. The company says you’ll start seeing the button on these properties in the coming days.

    Update: Now Blogger has a post up about adding the button to your Blogger Blog. Software engineer Marcos Almeida writes:

    To add the +1 button to your blog, you’ll need to enable Share buttons on Blogger. To do this, go to Design > Page Elements on your Blogger dashboard, find the Blog posts area, click on Edit, and select the “Show Share Buttons” option. If you are already using Share buttons, the +1 button will automatically show up as a new share option.

    Update 2: Google has now made the post live on the Inside AdWords blog, and cross-posted it on the Official Google Blog, so you can read it in its entirety.

  • Google and Apple Renew Search Deal

    Google and Apple Renew Search Deal

    Former Google CEO and current Google executive chairman Eric Schmidt spoke at the All Things Digital D9 conference, and revealed that Google has renewed its search deal with Apple. Google will remain the default search engine on iPhones and iPads.

    There have recently been some questions about whether Apple would stick with Google Maps in the upcoming iOS 5, or abandon it in favor of a technology of its own, but that appears not to be the case. Google Maps will reportedly continue to get featured placement on iOS devices.

    Danny Sullivan, who liveblogged Schmidt’s comments at D9, quotes him as saying, “We have a very, very good partnership,” and that both the search and maps deals have been renewed. He didn’t reveal any more details about the deals.

    The search deal, especially, could be critical for Google in keeping Bing from gaining a significant amount of search market share (or at least an even more significant amount than it’s already poised to gain). Bing should be getting some nice boosts from the Windows Phone platform, as well as deals with Nokia and RIM (and watch out if Microsoft ever decides to put a proper browser with Bing as the default search engine on Xbox).

    While Android has been doing incredibly well, Google remaining on the very popular iOS devices matters. Given that Apple is about to introduce the next generation of iOS itself, one can only imagine that a new wave of consumer interest is on the horizon.

    The maps deal matters, as it is simply an extension of search, and obviously an important one on mobile devices. Bing just launched some new streetside view technology for its own Maps service.

    While Google (and Schmidt specifically) has repeatedly referred to Microsoft as its main competitor, Schmidt spoke about what he calls the “gang of four” companies that are dominating in consumer tech. This includes Google, Apple, Amazon, and Facebook – no Microsoft. When asked by interviewers Kara Swisher and Walt Mossberg about Xbox, Schmidt downplayed the wildly popular gaming console, saying that “it’s not a platform at the computational level,” and that Microsoft is still fundamentally about Office and Windows (as quoted by All Things Digital’s Peter Kafka).

    Really? Microsoft’s latest earnings report tells a different story.

    It’s also clear that Xbox is becoming less and less about just gaming. According to Microsoft, 40% of all activity on the Xbox 360 is non-gaming activity, meaning that users are spending nearly half the time on the console watching streaming video. Users can stream movies and music through Zune, which might be getting a rebrand. Again, look out if they simply throw a browser into the mix. I wonder if Microsoft has anything like that.

    It’s very interesting that Schmidt would downplay the significance of the Xbox, given the less-than-stellar launch of Google’s entry into the living room – Google TV.