WebProNews

Tag: algorithms

  • Alteryx Acquires Feature Labs, An MIT-Born Machine Learning Startup

    Data science is one of the fastest growing segments of the tech industry, and Alteryx, Inc. is front and center in the data revolution. The Alteryx Platform provides a collaborative, governed platform to quickly and efficiently search, analyze and use pertinent data.

    To continue accelerating innovation, Alteryx announced it has purchased a startup with roots in the Massachusetts Institute of Technology (MIT). Feature Labs “automates feature engineering for machine learning and artificial intelligence (AI) applications.”

    Combining the two companies’ platforms and engineering will result in faster time-to-insight and time-to-value for data scientists and analysts. Feature Labs’ algorithms are designed to “optimize the manual, time-consuming and error-prone process required to build machine learning models.”

    Feature Labs makes its open-source libraries available to data scientists around the world. In what is no doubt welcome news, Alteryx has already committed to continued support of the open-source community.
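
    To give a rough sense of what automating feature engineering means in practice, the sketch below hand-builds a few aggregate features from a hypothetical transactions table using pandas, the kind of repetitive rollup work that tooling like Feature Labs’ is designed to generate automatically. The table and column names are invented for illustration; this is not Alteryx or Feature Labs code.

    ```python
    import pandas as pd

    # Hypothetical raw data: one row per transaction, many rows per customer.
    transactions = pd.DataFrame({
        "customer_id": [1, 1, 2, 2, 2, 3],
        "amount": [20.0, 35.5, 5.0, 12.5, 7.5, 99.0],
        "timestamp": pd.to_datetime([
            "2019-06-01", "2019-06-15", "2019-06-03",
            "2019-06-20", "2019-06-28", "2019-06-11",
        ]),
    })

    # Manual feature engineering: roll transactions up to one row per customer.
    # Automated feature engineering tools generate large families of such
    # aggregations (and stacked combinations of them) without hand-coding each one.
    features = transactions.groupby("customer_id").agg(
        txn_count=("amount", "count"),
        total_spend=("amount", "sum"),
        avg_spend=("amount", "mean"),
        days_active=("timestamp", lambda s: (s.max() - s.min()).days),
    )
    print(features)
    ```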

    From the Press Release:

    “Feature Labs’ vision to help both data scientists and business analysts easily gain insight and understand the factors driving their business matches the Alteryx DNA. Together, we are helping customers address the skills gap by putting more powerful advanced analytic capabilities directly into the hands of those responsible for making faster decisions and accelerating results. We are excited to welcome the Feature Labs team and to add an engineering hub in Boston,” said Dean Stoecker, co-founder and CEO of Alteryx.

    “Alteryx maintains its leadership in the market by continuing to evolve its best-in-class, code-free and code-friendly platform to anticipate and meet the demands of the 54 million data workers worldwide. With the addition of our unique capabilities, we expect to empower more businesses to build machine learning algorithms faster and operationalize data science,” said Max Kanter, co-founder and CEO of Feature Labs. “Feature engineering is often a time-consuming and manual process and we help companies automate this process and deploy impactful machine learning models.”

  • 5 SEO Trends Digital Marketers Should Not Ignore in 2017

    Anyone who has worked in SEO for a while knows there are no fixed rules in the game. To consistently outperform your rivals, you have to master the trends as they come or be swept away into oblivion.

    For 2017, here are the top 5 trends in SEO that will give your brand more visibility online:

    Smarter AIs Could Change Algorithms

    One of the major factors that could affect SEO in 2017 is, of course, the latest wave of advancements in artificial intelligence technology. Everyone should expect the way search engines work to change as smarter AIs join the game.

    Google users should expect changes in how the popular search engine does the work for them. In late 2016, Google RankBrain was unleashed, paving the way for the search engine to learn how people actually use it.

    The latest Hummingbird extension boasts algorithmic machine learning technology with the end goal of improving the search experience for users. According to Forbes, RankBrain enabled Google to learn how people use phrases in their queries and, with that information, update the search engine’s algorithm accordingly. This means that content providers must relearn things where necessary and adapt to the changing search landscape. The previous update left many webmasters grumbling when they found that their articles hardly made it to the coveted “Top Stories” section anymore.

    The rising popularity of digital assistants such as Siri and Cortana is also changing the way people search online. With the increasing use of these intelligent assistants, advanced forms of conversational queries will become more common, opening up another segment that companies could target.

    AMP Gets Amped

    While desktop computing won’t exactly disappear, search engine use is projected to see the most growth in the mobile segment. The Accelerated Mobile Pages (AMP) project anticipates this trend, providing a framework for content optimized for mobile browsing.

    Pages running on AMP load on mobile devices about four times faster than regular pages. In fact, Google favors AMP content: since last February, it has been marking AMP pages with a lightning bolt icon and featuring them more prominently in search results.

    Going AMP would also benefit users in the long run. AMP pages load faster because they use roughly eight times less data than a regular page, and everyone knows that loading speed is a big factor in viewer retention.

    Branding Goes Personal

    Some industry watchers predict that personal branding is the way to go to be successful with your online campaign. Of course, that is not saying that you should do away with the corporate brand, but there are advantages when people within an organization tell their own stories. Think of personal branding as a way to complement a company’s SEO efforts and how it reaches out to its online customers.

    Nowadays, corporations have to deal with being perceived by consumers as manipulative and greedy. Engaging consumers on a personal level is seen as the way to defuse this consumer wariness. By providing a personal identity that corporations naturally lack, personal branding makes it easier for consumers to trust the brand.

    In addition, posting on a personal level amplifies a company’s reach. For instance, if a CEO maintains three personal social media accounts for this purpose, each account can grow its own follower base independently, multiplying the company’s exposure. These separate accounts can also be used to target different segments of the market, allowing more customized posts that could potentially increase engagement.

    UEO Meets SEO

    Another important trend to watch out for is the rising importance of user experience optimization (UEO) in SEO. There are indications that UEO is going to carry more weight in search rankings.

    Google is now hinting that it may give more weight to user experience in its search results. One such hint is the search giant’s apparent preference for fast-loading pages, reflected in its treatment of AMP content.

    If the trend continues, the next step would be for Google to favor pages that offer a more enjoyable user experience. One metric that could come into play is how long a visitor stays on a page: a long visit usually means the visitor enjoys the content. While user experience has been an important ranking signal for some time now, it looks set to become even more important in future versions of the search algorithm. The bottom line is that webmasters should post quality content on well-designed sites that most people will enjoy.

    Content Gets Denser

    Speaking of content, experts are predicting another trend: the rise of denser content. According to Smart Insights, there was a time when tons of brief but “fluffy” posts sufficed; that was eventually replaced by lengthy, seemingly complicated content written to rank. Both extremes are now giving way to what is referred to as dense content.

    Simply put, dense content packs as much information as possible into the smallest possible space. This presents an entirely new challenge, one that demands some spark of creativity and a flair for creating stunning visuals. But then, the challenge is what makes SEO so interesting.

    [Featured Image by Pixabay]

  • Google Could Easily Rig an Election with Search Results, Says Study

    Search results wield the power to color one’s view of any person, place, or thing. This is a given. And being the far-and-away biggest search engine in the world, Google wields most of that power. Of course, in order to sleep at night, we all have to assume that Google will, ultimately, refrain from using that power to nefarious ends. At least not too nefarious.

    Though it should be obvious that Google plays a huge role in most Americans’ perceptions, it’s certainly unnerving to think about the search giant swaying an election.

    But that’s exactly what Google has the power to do, according to researchers.

    Psychologist Robert Epstein says, unequivocally, that your next president could ascend to the Oval Office with the help of some Google search algorithm tweaks. And he has some data to prove it.

    Epstein set up a very basic experiment. Take a bunch of undecided voters, give them the choice between two candidates, set them loose to search said candidates for 15 minutes on a Google-like search engine, and see if it sways their opinions.

    And boy did it ever.

    From Epstein’s write-up at Politico:

    In our basic experiment, participants were randomly assigned to one of three groups in which search rankings favored either Candidate A, Candidate B or neither candidate. Participants were given brief descriptions of each candidate and then asked how much they liked and trusted each candidate and whom they would vote for. Then they were allowed up to 15 minutes to conduct online research on the candidates using a Google-like search engine we created called Kadoodle.

     

    Each group had access to the same 30 search results—all real search results linking to real web pages from a past election. Only the ordering of the results differed in the three groups. People could click freely on any result or shift between any of five different results pages, just as one can on Google’s search engine.

     

    When our participants were done searching, we asked them those questions again, and, voilà: On all measures, opinions shifted in the direction of the candidate who was favored in the rankings. Trust, liking and voting preferences all shifted predictably.

    How much of a shift? Epstein says favorability ratings for the candidates jumped anywhere from 37 to 63 percent, which, given that elections are often decided by small margins, is a pretty big deal.

    It’s not far-fetched when you think about it. If you search for a candidate and mostly see negative headlines on the first page of results, it’s reasonable to think your opinion of that candidate may suffer. Flip that to positive results, and you can understand how Epstein thinks Google could easily promote certain candidates.

    Of course, one would have to believe that Google would want to influence an election. I mean, who knows? It’s not as if Google is a massive corporation with a multitude of vested interests.

    Would that be evil of them?

  • Facebook Might Give Users More Control over News Feed

    Facebook is still testing ways to let users prioritize particular friends and pages in their news feed – something that could be good news for all the pages out there that continue to suffer from reach issues.

    TechCrunch spotted a new test feature called ‘See First’ which allows users to designate certain people and pages to “see first” on top of their news feed.

    Users would be able to select one of three news feed visibility options for friends and pages – unfollow, default, or See First. If See First is selected, that person or page’s new content will always appear at the top of your news feed.
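
    As a rough illustration of how a “See First” designation could interact with normal ranked ordering, here is a minimal sketch: posts from designated sources are pinned above everything else, and each group falls back to its ranking score. The structure and names are assumptions for illustration, not Facebook’s implementation.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Post:
        source: str       # friend or page that posted
        score: float      # relevance score produced by the ranking model

    def order_feed(posts, see_first_sources):
        """Pinned-on-top ordering: See First sources come before everything else,
        and within each group posts fall back to their normal ranking score."""
        return sorted(
            posts,
            key=lambda p: (p.source not in see_first_sources, -p.score),
        )

    feed = [
        Post("Local News Page", 0.41),
        Post("Close Friend", 0.87),
        Post("Favorite Band", 0.15),
    ]
    print(order_feed(feed, see_first_sources={"Favorite Band"}))
    ```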

    Facebook gave this statement:

    “We are always exploring new ways to improve the Facebook experience, and are currently running a small test of a feature that lets you indicate that you’d like to see posts from a specific person or Page at the top of your News Feed.”

    This new test is just a tweak on another test Facebook ran back in April. A few months ago, the company prompted users atop their news feeds to “pick friends and pages and see their posts at the top of News Feed.” Facebook advertised it as a way to “see more of what you love.”

    Of course, we talked about how this could be very, very good for pages:

    Facebook can’t show you everything from every friend and page you follow. Anyone with a page knows how Facebook’s organic reach has plummeted over the past year or so. Facebook says that it does not filter posts from friends, however, and all you have to do to see every single thing every single friend posts is to scroll down far enough.

     

    Of course, that’s not really feasible. Facebook’s algorithms, which weigh the importance of posts on a variety of factors, should take into account how close you are to said person (through interactions) when sorting your News Feed.

     

    But this would be one surefire way to tell Facebook that you never, under any circumstances, want to miss a post by a specific person or page.

     

    Facebook already does something like this for friends. You can still add friends to a “close friends” list, “to see more of them in your News Feed and get notified each time they post.” The notifications are optional.

     

    But this could be really good news for pages, which continue to struggle with visibility. Recently, Facebook announced a tweak to News Feed that would show users more content from friends, and even less from pages.

     

    But if Facebook allows people to designate pages whose content they under no circumstances want to miss, it could help those pages, at least in theory, get more reach.

    Facebook appears to be testing multiple ways to give users more control over their news feeds. If Facebook does indeed roll this feature out widely, pages will need to make sure their content is awesome enough that people will want to designate it as “See First”.

  • Google Maps’ Racist Results To Get the Googlebomb Fix

    This post contains language some may find offensive.

    Google says it screwed up and is sorry for pointing people toward the White House, historically black colleges, and other locations when shockingly racist search terms were entered into Google Maps.

    “We were deeply upset by this issue, and we are fixing it now. We apologize this has taken some time to resolve, and want to share more about what we are doing to correct the problem,” says Google.

    “At Google, we work hard to bring people the information they are looking for, including information about the physical world through Google Maps. Our ranking systems are designed to return results that match a person’s query. For Maps, this means using content about businesses and other public places from across the web. But this week, we heard about a failure in our system—loud and clear. Certain offensive search terms were triggering unexpected maps results, typically because people had used the offensive term in online discussions of the place. This surfaced inappropriate results that users likely weren’t looking for.”

    (GIF via the Washington Post)

    Long story short, Google’s practice of pulling in data from all across the web and using it to categorize and label places inside Maps kind of bit them in the ass.

    Danny Sullivan at Search Engine Land has a nice description of how Maps searches for ‘nigger house’, ‘nigger king’ wound up directing people toward the Presidential residence:

    To understand more, say Google knows about a local sporting goods store. The owner of that store might explain in the description it provides to Google Maps that it sells baseball, football and hockey equipment. It also sells other sporting equipment, but if these things aren’t also listed in its description or on its associated web site, the store might not be deemed relevant for those things.

    With the Pigeon Update, Google sought to correct this. Imagine that some customer of the site wrote a blog post saying that the store was a great place to get skiing equipment. Google, seeing the business named in that post, might effectively add this information to the business listing, making it relevant for skiing equipment. To our understanding, there doesn’t even have to be a link to the business site or listing in Google Maps. Using a business name alone might be enough to create the connection.

    Ok, so how is Google planning on fixing this?

    The same way they fixed another issue many years ago, apparently.

    “Our team has been working hard to fix this issue. Building upon a key algorithmic change we developed for Google Search, we’ve started to update our ranking system to address the majority of these searches—this will gradually roll out globally and we’ll continue to refine our systems over time. Simply put, you shouldn’t see these kinds of results in Google Maps, and we’re taking steps to make sure you don’t,” says Google.

    That “key algorithmic change” involved tackling Googlebombs – the practice many pranksters used to make sites rank for certain, often derogatory searches by linking to the site behind “obscure or meaningless queries.” If enough people link to George W. Bush’s page using the phrase “miserable failure”, then his page will pop up at the top of searches for the term “miserable failure”. That actually happened by the way.

    But Google made some algorithmic changes so that it doesn’t really work like that anymore. That’s surely what the company hopes to do with this whole Maps fiasco. Google is likely tweaking its algorithms to make sure the words and phrases associated with map listings actually appear in the location’s description and official pages.
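
    A toy way to picture that kind of fix: only keep a query-term association if the term actually appears in text the listing itself controls. This is just an illustration of the idea described above, not Google’s ranking code, and the example strings are invented.

    ```python
    import re

    def filter_associations(candidate_terms, listing_text):
        """Keep only query terms whose words all appear in the listing's own
        description/pages, discarding labels that third-party pages attached."""
        listing_words = set(re.findall(r"[a-z]+", listing_text.lower()))
        return {
            term for term in candidate_terms
            if all(w in listing_words for w in re.findall(r"[a-z]+", term.lower()))
        }

    # Terms scraped from around the web vs. the listing's own description.
    scraped = {"historic residence", "presidential tours", "offensive nickname"}
    description = "The White House is the official presidential residence, offering historic tours."
    print(filter_associations(scraped, description))
    ```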

    In other, depressing news, it’s 2015 and there are still a bunch of people calling The White House ‘nigger house’ online.

    Image via Cezary p, Wikimedia Commons

  • Twitter’s New ‘While You Were Away’ Feature Is a Smart Move to Keep People Interested

    It’s easy to get lost in the Twitterverse.

    I’m not just talking about the nearly 300 million-member community as a whole, but also your own Twitterverse, made up of the people you follow. It could be 100 people, it could be 1,000. Unlike Facebook, which doesn’t show you all the posts coming in from all of your contacts, Twitter has always been an unmitigated content delivery system. For better or for worse, your Twitter stream contains it all: every post from everyone you follow.

    And unless you’re checking Twitter constantly, it’s very easy to miss something important.

    That’s what makes Twitter’s newest timeline tweak smart. The social network has begun to roll out a new ‘While You Were Away’ feature that automatically surfaces tweets that you might have missed to the top of your timeline.

    Here’s how it works, from Twitter:

    “A lot can happen while you’re on the go. To fill in some of those gaps, we will surface a few of the best Tweets you probably wouldn’t have seen otherwise, determined by engagement and other factors. If you check in on Twitter now and then for a quick snapshot of what’s happening, you’ll see this recap more often; if you spend a lot of time on Twitter already, you’ll see it less.”

    The ‘While You Were Away’ feature is now live on iOS, and is coming soon to Android and desktop.

    Is this the Facebook-ization of Twitter? Not really. Yes, it’s an algorithmic tweak (and Twitter’s not too forthcoming on how it’s going to work: other factors?), but Twitter’s not deciding what content you see or, more importantly, what content you don’t see. It’s all still there in your timeline if you want to scroll through it. Twitter’s just surfacing tweets that it thinks you’d be sad to have missed. How sweet of you, Twitter.

    Of course, this could all change when Twitter is 100 percent sure that while I was away, I wanted to see an ad for Chobani yogurt.

    Twitter’s been making small changes to the timeline for months now, and we knew this one was coming. This is not the big, widely discussed, and mostly feared algorithmic shift to Twitter. What it is is a pretty smart way to keep people who aren’t constantly checking Twitter engaged.

    Image via Twitter

  • If Online Media Is Decaying, Is Facebook to Blame?

    You’d probably laugh if some higher up at Little Caesars posted an epic diatribe blasting the state of modern pizza. You might scratch your head if this exec, let’s call him Bob, ranted about lesser quality pizza, and how the only thing we get anymore is the lowest common denominator pizza–the cheapest, flashiest, most easily digestible pizza that requires the least amount of effort. You’d most likely stare at Bob, with a look of unadulterated bewilderment as he lamented the dumbing down of pizza to its current easily palatable, but otherwise unfulfilling form.

    You’d say, “But Bob, you’re part of the problem! Bob, you may just be the whole damn problem!”

    Mike Hudack is a Director of Product Management at Facebook. Recently, he posted a 461-word teardown of the current state of media. Below, I’ll post the whole thing–but to sum it up, it goes a little something like this:

    All we have these days is lesser quality journalism. The only thing we get anymore is the lowest common denominator journalism–the cheapest, flashiest, most easily digestible journalism that requires the least amount of effort. Journalism has been dumbed down to its current easily palatable, but otherwise unfulfilling form. We used to have live reporting from Baghdad on CNN, and now all we see are BuzzFeed listicles.

    How much do you think Facebook contributes to the issues Hudack brings up? Let us know in the comments.

    Mike Hudack is not wrong about this–being totally off base is not what makes this post incredible in every single way. What does that is the fact that he works for Facebook.

    “It’s hard to tell who’s to blame. But someone should fix this shit,” he says.

    I’m going to need someone who studies cognitive dissonance to explain this to me before I blow a fuse trying to wrap my temperamental little writer brain around it.

    Here’s the entire post, for context:

    Please allow me to rant for a moment about the state of the media.

    It’s well known that CNN has gone from the network of Bernie Shaw, John Holliman, and Peter Arnett reporting live from Baghdad in 1991 to the network of kidnapped white girls. Our nation’s newspapers have, with the exception of The New York Times, Washington Post and The Wall Street Journal been almost entirely hollowed out. They are ghosts in a shell.

    Evening newscasts are jokes, and copycat television newsmagazines have turned into tabloids — “OK” rather than Time. 60 Minutes lives on, suffering only the occasional scandal. More young Americans get their news from The Daily Show than from Brokaw’s replacement. Can you even name Brokaw’s replacement? I don’t think I can.

    Meet the Press has become a joke since David Gregory took over. We’ll probably never get another Tim Russert. And of course Fox News and msnbc care more about telling their viewers what they want to hear than informing the national conversation in any meaningful way.

    And so we turn to the Internet for our salvation. We could have gotten it in The Huffington Post but we didn’t. We could have gotten it in BuzzFeed, but it turns out that BuzzFeed’s homepage is like CNN’s but only more so. Listicles of the “28 young couples you know” replace the kidnapped white girl. Same thing, different demographics.

    We kind of get it from VICE. In between the salacious articles about Atlanta strip clubs we get the occasional real reporting from North Korea or Donetsk. We celebrate these acts of journalistic bravery specifically because they are today so rare. VICE is so gonzo that it’s willing to do real journalism in actually dangerous areas! VICE is the savior of news!

    And we come to Ezra Klein. The great Ezra Klein of Wapo and msnbc. The man who, while a partisan, does not try to keep his own set of facts. He founded Vox. Personally I hoped that we would find a new home for serious journalism in a format that felt Internet-native and natural to people who grew up interacting with screens instead of just watching them from couches with bags of popcorn and a beer to keep their hands busy.

    And instead they write stupid stories about how you should wash your jeans instead of freezing them. To be fair their top headline right now is “How a bill made it through the worst Congress ever.” Which is better than “you can’t clean your jeans by freezing them.”

    The jeans story is their most read story today. Followed by “What microsoft doesn’t get about tablets” and “Is ’17 People’ really the best West Wing episode?”

    It’s hard to tell who’s to blame. But someone should fix this shit.

    Take a look at your Facebook news feed real quick. What do you see? Do you see dozens of links to long-form stories about veterans affairs or the current unrest in Ukraine? Do you see links to thought pieces on pathways to renewable energy? Do you see stories about immigrants, wages, and economic disparity?

    No, of course you don’t. You see “27 Types of Poop, and What They Really Mean.”

    Why? Because that’s what works on Facebook. If you’re a website that produces original content, you rely (to a varying degree) on posts “going viral.” How do things “go viral”? They get shared on Facebook. And what kind of content gets shared on Facebook? Poop lists.

    Sure, this is sort of an indictment of your friends and the human race in general. Poop lists are what people want to see. But more importantly, this is about Facebook and the complex, carefully guarded news feed algorithms that govern what we all see and how they choose to reward the very content that has Mike Hudack so incensed.

    “Hey, Mike, I just sent you a tweetstorm, but let me reproduce it here: My perception is that Facebook is *the* major factor in almost every trend you identified. I’m not saying this as a hater, but if you asked most people in media why we do these stories, they’d say, ‘They work on Facebook.’ And your own CEO has even provided an explanation for the phenomenon with his famed quote, ‘A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.’ This is not to say we (the (digital) media) don’t have our own pathologies, but Google and Facebook’s social and algorithmic influence dominate the ecology of our world,” says The Atlantic’s Alexis Madrigal in a reply to his rant.

    “And we (speaking for ALL THE MEDIA) would love to talk with Facebook about how we can do more substantive stuff and be rewarded. We really would. It’s all we ever talk about when we get together for beers and to complain about our industry and careers.”

    Or, more succinctly, as Valleywag’s Sam Biddle put it: “Sorry, but Facebook is why BuzzFeed is the way it is.”

    I think we can all agree that it would go a long way to solving the issues that Hudack laments if Facebook would lead the charge in promoting high-quality content.

    Here’s the thing–they might not be able to, even if they wanted to.

    If you own a Facebook page, you’re all too familiar with the big News Feed algorithm changes that took place back in December. Whatever Facebook tweaked, it slashed organic reach for tons of pages. Of course, many argued that the point of all this was to force people to “pay to play,” or boost their posts through paid reach. Facebook vehemently denied this, saying that it was all about pushing higher quality content.

    “Our surveys show that on average people prefer links to high quality articles about current events, their favorite sports team or shared interests, to the latest meme. Starting soon, we’ll be doing a better job of distinguishing between a high quality article on a website versus a meme photo hosted somewhere other than Facebook when people click on those stories on mobile. This means that high quality articles you or others read may show up a bit more prominently in your News Feed, and meme photos may show up a bit less prominently,” said Engineering Manager Varun Kacholia at the time.

    Ok, so if Facebook is committed to pushing more high quality content, then why are we all still seeing so many Poop Lists?

    Turns out, Facebook’s algorithms are simply bad at determining quality.

    WebProNews’ own Chris Crum explained this beautifully back in February, while discussing why BuzzFeed is able to get so much Facebook traffic when you can’t:

    What this is really about is likely Facebook’s shockingly unsophisticated methods for determining quality. The company has basically said as much. Peter Kafka at All Things D (now at Re/code) published an interview with Facebook News Feed manager Lars Backstrom right after the update was announced.

    He said flat out, “Right now, it’s mostly oriented around the source. As we refine our approaches, we’ll start distinguishing more and more between different types of content. But, for right now, when we think about how we identify “high quality,” it’s mostly at the source level.”

    That’s what it comes down to, it seems. If your site has managed to make the cut at the source level for Facebook, you should be good regardless of how many GIF lists you have in comparison to journalistic stories. It would seem that BuzzFeed had already done enough to be considered a viable source by Facebook, while others who have suffered major traffic hits had not. In other words, Facebook is playing favorites, and the list of favorites is an unknown.

    Just to make this point clear, Kafka asked in that interview, “So something that comes from publisher X, you might consider high quality, and if it comes from publishers Y, it’s low quality?”

    Backstrom’s answer was simply, “Yes.”

    So as long as BuzzFeed is publisher X, it can post as many poop lists as it wants with no repercussions, apparently. It’s already white-listed. Meanwhile, you can be publisher Y and break the news about the next natural disaster, and it means nothing. At least not until Facebook’s methods get more sophisticated.

    Not only that, but Hudack’s sharing of the Vox.com article that apparently served as the catalyst for his irony-soaked post is likely part of the problem as well. Here, he shares the article to talk shit about it, but does it really matter why he shared it? He shared it. It’s the circle of strife.

    Hudack has responded to many comments on his post, saying that he acknowledges Facebook could do more to stop “sending traffic to shitty listicles,” and that there are people within the company that would like to see a change:

    “This thread is awesome. Really awesome. I don’t work on Newsfeed or trending topics, so it’s hard for me to speak authoritatively about their role in the decline of media. But I’d argue that 20/20 turned into ‘OK’ before Facebook was really a thing, and CNN stopped being the network reporting live from Baghdad before Facebook became a leading source of referral traffic for the Internet,” he says.

    “Is Facebook helping or hurting? I don’t honestly know. You guys are right to point out that Facebook sends a lot of traffic to shitty listicles. But the relationship is tautological, isn’t it? People produce shitty listicles because they’re able to get people to click on them. People click on them so people produce shitty listicles. It’s not the listicles I mind so much as the lack of real, ground-breaking and courageous reporting that feels native to the medium. Produce that in a way that people want to read and I’m confident that Facebook and Google and Twitter will send traffic to it.

    And, to be clear, there are many people at Facebook who would like to be part of the solution and not just part of the problem. I’m sure that we’re all open to ideas for how we could improve the product to encourage the distribution of better quality journalism. I, for one, am all ears.”

    He also acknowledges a “filter bias”:

    “[C]ulture matters. I also want to emphasize that I don’t work on Feed or Trending Topics, and that I speak for myself and not the company or the folks who work hard to make Feed and Trending Topics better. I know those guys, but I can’t speak for them. I understand that there are such things as filter bias, and I think that you’re right that all of the players in the ecosystem can get better at this.”

    In the end, however, Hudack says his comments were about a true degradation in reporting quality.

    “What I’m seeing, though, is a general degradation of real reporting. Perhaps it’s easier to imagine me as an aging newspaperman looking around the newsroom saying ‘Where did the Woodwards and Bernsteins go?’”

    Of course, no one entity is to blame for the fact that your newsfeed is littered with listicles and articles about freezing your jeans. But you can’t advertise $5 Hot-n-Ready pizzas and then bitch when pizza everywhere tastes like cardboard, ketchup, and Velveeta.

    What do you think? Is Facebook to blame? What can be done about it? Would it help if Facebook were more open about how their news feed algorithms work, or do they simply need to get better at filtering high-quality content? Let us know in the comments.

    Image via Facebook Menlo Park, Facebook

  • Facebook Sociology: You’re More Likely to Post a Status If You See a Bunch of Statuses from Friends

    Facebook is always screwing around with their news feed algorithms, or what they call “trying to show you better, more relevant content.” It’s not like Facebook isn’t constantly tweaking this, but they’ve been doing it a lot lately – or at least being a lot more forthcoming about the process.

    Did you know that Facebook users are a bunch of sheep? Probably. But seriously, there’s a giant snowball effect when it comes to posting statuses. Apparently, according to Facebook’s internal testing, you’re much more likely to post a status if you see a bunch of statuses from your friends. Check this little excerpt from a recent Facebook blog post:

    The goal of every update to News Feed is to show people the most interesting stories at the top of their feed and display them in the best way possible. We regularly run tests to work out how to make the experience better. Through testing, we have found that when people see more text status updates on Facebook they write more status updates themselves. In fact, in our initial test when we showed more status updates from friends it led to on average 9 million more status updates written each day. Because of this, we showed people more text status updates in their News Feed.

    That’s rather interesting. I wonder if the same is true for photos, links, etc.

    The context for this comes from another Facebook algorithm change announced in the aforementioned blog post. Facebook is going to start showing users fewer text status updates from pages, instead opting for the “link share” style of post. That’s because it didn’t see the same kind of “status snowball” effect when those statuses came from pages – just when they came from friends.

    This comes on the heels of Facebook’s recent enormous algorithm change, one that’s been likened to Google’s giant Panda update.

    Image via Jennifer Conley, Flickr Creative Commons

  • Facebook Promises More Relevant Ads with Algorithm Tweak

    Ads are just a part of your Facebook news feed now. Whether they’re sponsored stories, page post ads, suggested posts, or whatever – the point is that they are there and they’re not going anywhere.

    With that in mind, Facebook says that they want to make the ads you see more relevant to your interests. To do that, the company is announcing an update to their algorithms that they say will enhance both the relevance and quality of every sponsored post you see on the site.

    “We are currently working on some updates to the ads algorithm to improve the relevance and quality of the ads people see. When deciding which ad to show to which groups of people, we are placing more emphasis on feedback we receive from people about ads, including how often people report or hide an ad,” says Facebook engineering manager Hong Ge.

    Facebook learns about which ads are more relevant to you based on your interactions with them – clicks, likes and shares. But they also make this determination based on more direct feedback – like when you hide an ad from your news feed.

    With this algorithm tweak, Facebook says it is putting more focus on that direct feedback: the moments when you tell it that you don’t want to see this kind of ad anymore.
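
    Conceptually, that amounts to folding negative feedback into the ad score with a heavier weight than before. Here is a minimal sketch with invented weights; it is not Facebook’s actual formula.

    ```python
    def ad_relevance(clicks, likes, shares, hides, reports, negative_weight=5.0):
        """Toy relevance score: positive engagement minus heavily weighted
        negative feedback (hides/reports), reflecting the idea of giving
        direct negative signals more influence. All weights are made up."""
        positive = clicks + 2 * likes + 3 * shares
        negative = negative_weight * (hides + 4 * reports)
        return positive - negative

    print(ad_relevance(clicks=120, likes=30, shares=5, hides=10, reports=2))
    ```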

    If you’re marketing on Facebook and actively running campaigns, Facebook warns that you may see a difference in your numbers in the coming weeks.

    “This means that some marketers may see some variation in the distribution of their ads in the coming weeks. Our goal is to make sure we deliver the most relevant ads, which should mean the right people are seeing a specific ad campaign. This is ultimately better for marketers, because it means their messages are reaching the people most interested in what they have to offer.”

    They also say that these changes won’t just be better for users; they’ll also help direct marketers’ ads to people who are actually receptive to them.

  • StumbleUpon Looks To Get Better At Determining When Content Is Evergreen

    StumbleUpon has an engineering contest going on, aimed at finding some help improving how it recommends content. Specifically, it’s looking for someone to develop an algorithm that makes the service better at determining when a piece of content is evergreen (meaning it stays relevant over a long period of time) and when it is better off only being shown to users in a more timely fashion.

    StumbleUpon has turned to Kaggle to launch the contest. Kaggle, for those unfamiliar, is a community site for data scientists. Through its Kaggle Connect platform, these scientists can compete with each other to solve data science problems put to them.

    The prize for StumbleUpon’s challenge is $5,000 and a possible internship with the company. Frankly, if you can solve this problem for StumbleUpon, it seems like a safe bet they would want you as part of the team. The company did recently undergo a significant downsizing, but they’re now actively hiring.

    A spokesman for StumbleUpon tells us that the service currently doesn’t consider specific categories to be evergreen or non-evergreen. As it stands, StumbleUpon simply relies on users’ ratings (thumbs up/down) to determine whether or not a piece of content is evergreen.

    “Basically, any topic can have evergreen content,” he says.

    In other words, it really doesn’t matter if a piece of content is submitted to the News category. It can still have long legs.

    “With this contest, we hope to develop an algorithm to analyze the page itself which will let us know how evergreen it may be before we serve it to the user,” the spokesman says.

    StumbleUpon says on the Kaggle page, “Many people know evergreen content when they see it, but can an algorithm make the same determination without human intuition? Your mission is to build a classifier which will evaluate a large set of URLs and label them as either evergreen or ephemeral. Can you out-class(ify) StumbleUpon? As an added incentive to the prize, a strong performance in this competition may lead to a career-launching internship at one of the best places to work in San Francisco.”
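
    A plausible baseline for this kind of contest is a simple bag-of-words text classifier trained on page content. The sketch below uses scikit-learn with made-up example pages and labels; the real competition data, features, and evaluation were StumbleUpon’s and Kaggle’s to define.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical page texts labeled 1 (evergreen) or 0 (ephemeral).
    pages = [
        "how to bake sourdough bread at home step by step",
        "classic chocolate chip cookie recipe",
        "election night live results and reactions",
        "breaking storm warning for tonight only",
    ]
    labels = [1, 1, 0, 0]

    # TF-IDF features over unigrams and bigrams, fed to logistic regression.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(pages, labels)

    print(model.predict(["beginner guide to growing tomatoes",
                         "live score updates from tonight's game"]))
    ```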

    The contest began on August 16th, and will run through October 31st. After that, it remains to be seen if StumbleUpon will find its problem solved, and if so, how long it will take for the algorithm to be implemented.

    Image: Kaggle

  • When And Where You Watch Something On Netflix May Soon Play A Role In What You Watch

    If you use Netflix regularly to stream movies and shows, there’s a really good chance that its recommendations play a pretty big part in your viewing habits. That will likely be even more the case now that user profiles are rolling out: you won’t have the distraction of what Netflix thinks other people in your house want to watch. It’s going to be more personal than ever.

    Wired spoke with a couple of Netflix engineers, producing a rather interesting look into the kinds of things Netflix takes into consideration when determining what to show users as recommendations. It turns out, as you might have guessed, that they use a lot of different data points related to your usage habits. I say usage, because it’s not just about viewing. In some cases, it’s literally about how you interact with the Netflix interface (in addition, of course, to your viewing habits).

    “We know what you played, searched for, or rated, as well as the time, date, and device,” explains engineering director Xavier Amatriain. “We even track user interactions such as browsing or scrolling behavior. All that data is fed into several algorithms, each optimized for a different purpose. In a broad sense, most of our algorithms are based on the assumption that similar viewing patterns represent similar user tastes. We can use the behavior of similar users to infer your preferences.”
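
    That “behavior of similar users” idea is classic collaborative filtering. Here is a minimal sketch of the general approach, using an invented play matrix rather than anything from Netflix:

    ```python
    import numpy as np

    # Rows = users, columns = titles; 1 means the user watched the title.
    plays = np.array([
        [1, 1, 0, 0, 1],   # user 0
        [1, 1, 1, 0, 0],   # user 1
        [0, 0, 1, 1, 0],   # user 2
    ], dtype=float)

    def recommend(user, plays, k=2):
        """Score unseen titles by how often the user's nearest neighbors
        (by cosine similarity of viewing patterns) watched them."""
        norms = np.linalg.norm(plays, axis=1)
        sims = plays @ plays[user] / (norms * norms[user] + 1e-9)
        sims[user] = 0.0                              # ignore self-similarity
        neighbors = np.argsort(sims)[::-1][:k]        # most similar users
        scores = sims[neighbors] @ plays[neighbors]   # similarity-weighted counts
        scores[plays[user] > 0] = -np.inf             # don't re-recommend
        return int(np.argmax(scores))

    print("Recommend title:", recommend(user=0, plays=plays))
    ```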

    He also says Netflix is working on incorporating viewing time data into the recommendation algorithms.

    “We have been working for some time on introducing context into recommendations,” Amatriain tells Wired. “We have data that suggests there is different viewing behavior depending on the day of the week, the time of day, the device, and sometimes even the location. But implementing contextual recommendations has practical challenges that we are currently working on. We hope to be using it in the near future.”

    Another interesting bit of the interview has Carlos Gomez-Uribe, VP of product innovation and personalization algorithms at Netflix, saying that “predicted ratings aren’t actually super-useful.” This, of course, was what the famed “Netflix Prize” was based on.

    Funny how things change.

  • New MIT Algorithm Predicts Twitter Trends Hours in Advance

    Researchers at the Massachusetts Institute of Technology (MIT) have announced a new algorithm they say is capable of predicting Twitter trends far in advance.

    The algorithm is claimed to predict with 95% accuracy the topics that will show up on Twitter’s trending topics list. It can make these predictions an average of an hour and a half before Twitter lists the topic as a trend, and can sometimes predict trends as much as four or five hours in advance.

    Devavrat Shah, associate professor in the electrical engineering and computer science department at MIT, and MIT graduate student Stanislav Nikolov will present the algorithm at the Interdisciplinary Workshop on Information and Decision in Social Networks in November.

    Shah stated that the algorithm is a nonparametric machine-learning algorithm, meaning it makes no assumptions about the shape of patterns. It compares changes over time in the number of tweets about a new topic to the changes over time seen in every sample in the training set. Also, training set samples with statistics similar to the new topic are more heavily weighted when determining a prediction. Shah compared it to voting, where each sample gets a vote, but some votes count more than others.
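
    In code, that weighted-voting idea looks roughly like the sketch below: every labeled training series votes on whether the new topic will trend, with closer matches getting heavier votes. The curves and the weighting constant are invented; this is a generic nonparametric nearest-neighbor sketch, not the MIT implementation.

    ```python
    import numpy as np

    def predict_trending(new_series, train_series, train_labels, gamma=5.0):
        """Nonparametric prediction: every training example votes,
        weighted by its similarity to the observed tweet-count curve."""
        votes = 0.0
        total = 0.0
        for series, label in zip(train_series, train_labels):
            dist = np.linalg.norm(new_series - series)
            weight = np.exp(-gamma * dist)        # closer curves count more
            votes += weight * label               # label: 1 = trended, 0 = didn't
            total += weight
        return votes / total                      # probability-like score

    # Toy tweet-count curves sampled at 5 time steps.
    train = [np.array([1, 2, 4, 8, 16]),    # exploded into a trend
             np.array([3, 3, 2, 3, 2])]     # stayed flat
    labels = [1, 0]
    observed = np.array([1, 3, 5, 9, 14])
    print(predict_trending(observed, train, labels))
    ```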

    This method is different from the standard approach to machine learning, where researchers create a model of the pattern whose specifics need to be inferred. In theory, the new approach could apply to any quantity that varies over time (including the stock market), given the right subset of training data.

    For Shah and Nikolov’s initial experiments, they used data from 200 Twitter topics that were listed as trends and 200 that were not. “The training sets are very small, but we still get strong results,” said Shah. In addition to the algorithm’s 95% prediction rate, it also had only a 4% false-positive rate.

    The accuracy of the system can increase with additional training sets, but the computing costs will also increase. However, Shah revealed that the algorithm has been designed to execute across separate machines, such as web servers. “It is perfectly suited to the modern computational framework,” said Shah.

    “It’s very creative to use the data itself to find out what trends look like,” said Ashish Goel, associate professor of management science at Stanford University and a member of Twitter’s technical advisory board. “It’s quite creative and quite timely and hopefully quite useful.

    “People go to social-media sites to find out what’s happening now. So in that sense, speeding up the process is something that is very useful.”

    (Image courtesy MIT)

  • Science Finally Proves That Justin Bieber Sucks (really, pop music in general)

    Radiohead’s Thom Yorke famously said that “the reason that people pirate, is they want access to good music. And they don’t get it because the radio is so shit.”

    Well, the blame may not rest entirely on the radio stations – they may, in reality, truly have no choice in the quality of music they play. That’s because new research has confirmed that as the years have gone by, popular music has gotten worse and worse. Specifically, louder and less original, which is a longer way of saying it sucks.

    The research comes from an AI specialist named Joan Serra at the Spanish National Research Council (CSIC). CSIC is the largest public institution of research in Spain, and the third largest in all of Europe – so let’s throw any arguments about legitimacy out the window.

    Serra and the team used “complex algorithms” to process pop music from the last 55 years (1955-2010) – from Elvis to Lady Gaga. To do this, they used the Million Song Dataset, a “freely-available collection of audio features and metadata for a million contemporary popular music tracks,” whose purpose is to aid in research projects just like this one.

    “We found evidence of a progressive homogenization of the musical discourse,” Serra told Reuters. “In particular, we obtained numerical indicators that the diversity of transitions between note combinations – roughly speaking chords plus melodies – has consistently diminished in the last 50 years.”

    Translation: It’s all the bloody same.
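
    One simple way to quantify “diversity of transitions” is to measure how varied a song’s chord-to-chord transitions are, for example via the entropy of the transition distribution. The toy sketch below illustrates that kind of measure; it is not the CSIC team’s actual methodology, and the chord sequences are invented.

    ```python
    from collections import Counter
    from math import log2

    def transition_entropy(chords):
        """Shannon entropy (in bits) of the distribution of consecutive
        chord-pair transitions; higher means more varied progressions."""
        pairs = Counter(zip(chords, chords[1:]))
        total = sum(pairs.values())
        return -sum((n / total) * log2(n / total) for n in pairs.values())

    # A looping four-chord pop progression vs. a more varied one.
    print(transition_entropy(["C", "G", "Am", "F", "C", "G", "Am", "F"]))
    print(transition_entropy(["C", "Em", "F", "G7", "Am", "Dm", "Bb", "C"]))
    ```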

    Reuters says that the team also found that the “timbre palette” has gotten worse over the years.

    Translation: The actual sounds are all the bloody same.

    Not only is modern pop music swimming in a vomitous sea of sameness, but it’s also getting louder.

    Now, when you’re having an argument with your 14-year-old cousin about her Bieber fever and his intrinsic talent, or how he’s fulfilling Kurt Cobain’s legacy or some shit, all you have to do is say that science has proven that he sucks.

    Now, to be fair, one could make the argument that pop music has all been the same for the last half a century. Maybe the latest generation is worse, but in reality it really is all the same. For your consideration:

  • The Algorithm That Leads To The Robotic Revolution Has Been Found

    It was both charming and frightening when Google made the robot brain that learned what a cat was just by watching videos of them on YouTube. The kind of artificial intelligence that makes these feats possible will obviously be the downfall of man at the hands of the robots. Despite the warnings of many paranoid people, science continues to march forward towards our eventual extinction.

    It all starts with an algorithm proposed by Dr. Łukasz Kaiser of Université Paris Diderot. The idea is that machines can learn how anything works just by watching it work. It’s as if Google’s cat-loving computer learned what a cat was and then learned how to care for a cat by watching more videos of those actions.

    This algorithm is not being tested on cats though. The tests are currently centered around games and learning how to play said games. The hope is that a computer can watch people playing a game like Connect 4 and then learn how to beat a human opponent just by watching them. Sure, machines can beat humans in Chess, but the machine has to be programmed by a human with all the potential moves available to it.

    What makes Dr. Kaiser’s research so fascinating, and terrifying, is that machines would only have to watch to learn. We as humans learn by observing things around us and we’re giving machines that same ability. Of course, now we have to discuss machine rights and whether or not I’m a machinist. Look, I don’t hate machines, but I would not like to be killed by something that can’t feel basic emotions.

    If you want to learn more about the downfall of humanity, check out Dr. Kaiser’s presentation on his research at the Third Conference on Artificial General Intelligence.

    Lukasz Kaiser-Playing General Structure Rewriting Games from Raj Dye on Vimeo.

    [h/t: Wired]

  • MIT Brains Work On “Smart Sand” Robots

    Researchers at MIT are working on a project that could bring sci-fi fantasies to reality. But, then again, when aren’t they?

    Nowadays, if you want something built, you take wood or other materials and build or cut it out of that. But, what if you could have a computer model of what you want, and have that thing magically appear out of a box of sand?

    That is the very vision the brains at the Distributed Robotics Laboratory (DRL) at MIT’s Computer Science and Artificial Intelligence Laboratory are pursuing. It involves a lot of programming and seemingly simple twiddling, but it could change the way things are made, with the same kind of promise that has people so excited about 3-D printing. The development is called “smart sand”.

    From MIT News:

    At the IEEE International Conference on Robotics and Automation in May — the world’s premier robotics conference — DRL researchers will present a paper describing algorithms that could enable such “smart sand.” They also describe experiments in which they tested the algorithms on somewhat larger particles — cubes about 10 millimeters to an edge, with rudimentary microprocessors inside and very unusual magnets on four of their sides.

    Unlike many other approaches to reconfigurable robots, smart sand uses a subtractive method, akin to stone carving, rather than an additive method, akin to snapping LEGO blocks together. A heap of smart sand would be analogous to the rough block of stone that a sculptor begins with. The individual grains would pass messages back and forth and selectively attach to each other to form a three-dimensional object; the grains not necessary to build that object would simply fall away. When the object had served its purpose, it would be returned to the heap. Its constituent grains would detach from each other, becoming free to participate in the formation of a new shape.

    Of course, ten-millimeter cubes are hardly what you would call “sand”, but the idea is to get the functionality and algorithms in place, then shrink the hardware over time.

    “Take the core functionalities of their pebbles,” says [Robert] Wood, who directs Harvard’s Microrobotics Laboratory. “They have the ability to latch onto their neighbors; they have the ability to talk to their neighbors; they have the ability to do some computation. Those are all things that are certainly feasible to think about doing in smaller packages.”

    “It would take quite a lot of engineering to do that, of course,” Wood cautions. “That’s a well-posed but very difficult set of engineering challenges that they could continue to address in the future.”

    This video gives you an idea of the “subtractive” methods of building that the MIT folks are working on.
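
    For a very rough feel for the subtractive idea, the toy sketch below starts from a full block of grains and keeps only those inside a target shape, letting the rest “fall away.” Real smart sand would make that decision through peer-to-peer messages between neighboring grains; this is only an illustration of subtractive versus additive assembly, not the DRL algorithm.

    ```python
    # Toy "subtractive" formation on an 8x8 block of grains.
    GRID = 8

    def in_target_shape(x, y):
        """Hypothetical target: a 4x4 square in the middle of the block."""
        return 2 <= x < 6 and 2 <= y < 6

    block = {(x, y) for x in range(GRID) for y in range(GRID)}   # heap of grains
    kept = {g for g in block if in_target_shape(*g)}             # grains that latch on
    fallen = block - kept                                        # grains that detach

    for y in range(GRID):
        print("".join("#" if (x, y) in kept else "." for x in range(GRID)))
    ```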

  • Computer Models Help Predict Dementia Patterns

    Researchers at Weill Cornell Medical College have developed a computer program that has tracked the manner in which different forms of dementia spread within a human brain. They say their mathematical model can be used to predict where and approximately when an individual patient’s brain will suffer from the spread, neuron to neuron, of “prion-like” toxic proteins — a process they say underlies all forms of dementia.

    Their findings, published in the March 22 issue of Neuron, could help patients and their families confirm a diagnosis of dementia and prepare in advance for future cognitive declines over time. In the future — in an era where targeted drugs against dementia exist — the program might also help physicians identify suitable brain targets for therapeutic intervention, says the study’s lead researcher, Ashish Raj, Ph.D., an assistant professor of computer science in radiology at Weill Cornell Medical College.

    “Think of it as a weather radar system, which shows you a video of weather patterns in your area over the next 48 hours,” says Dr. Raj. “Our model, when applied to the baseline magnetic resonance imaging scan of an individual brain, can similarly produce a future map of degeneration in that person over the next few years or decades.

    “This could allow neurologists to predict what the patient’s neuroanatomic and associated cognitive state will be at any given point in the future. They could tell whether and when the patient will develop speech impediments, memory loss, behavioral peculiarities, and so on,” he says. “Knowledge of what the future holds will allow patients to make informed choices regarding their lifestyle and therapeutic interventions.

    “At some point we will gain the ability to target and improve the health of specific brain regions and nerve fiber tracts,” Dr. Raj says. “At that point, a good prediction of a subject’s future anatomic state can help identify promising target regions for this intervention. Early detection will be key to preventing and managing dementia.”

    The computational model, which Dr. Raj developed, is the latest, and one of the most significant, validations of the idea that dementia is caused by proteins that spread through the brain along networks of neurons. It extends findings that were widely reported in February that Alzheimer’s disease starts in a particular brain region, but spreads further via misfolded, toxic “tau” proteins. Those studies, by researchers at Columbia University Medical Center and Massachusetts General Hospital, were conducted in mouse models and focused only on Alzheimer’s disease.

    In this study, Dr. Raj details how he developed the mathematical model of the flow of toxic proteins, and then demonstrates that it correctly predicted the patterns of degeneration that result in a number of different forms of dementia.

    He says his model is predicated on the recent understanding that all known forms of dementia are accompanied by, and likely caused by, abnormal or “misfolded” proteins. Proteins have a defined shape, depending on their specific function — but proteins that become misshapen can produce unwanted toxic effects. One example is tau, which is found in a misfolded state in the brains of both Alzheimer’s patients and patients with frontal temporal dementia (FTD). Other proteins, such as TDP43 and ubiquitin, are also found in FTD, and alpha synuclein is found in Parkinson’s disease.

    These proteins are called “prion-like” because misfolded, or diseased, proteins induce the misfolding of other proteins they touch down a specific neuronal pathway. Prion diseases (such as mad cow disease) that involve transmission of misfolded proteins are thought to be infectious between people. “There is no evidence that Alzheimer’s or other dementias are contagious in that way, which is why their transmission is called prion-like.”

    Dr. Raj calls his model of trans-neuronal spread of misfolded proteins “very simple.” It models the same process by which any gas diffuses in air, except that in the case of dementias the diffusion process occurs along connected neural fiber tracts in the brain.

    “This is a common process by which any disease-causing protein can result in a variety of dementias,” he says.
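
    Written out, that kind of network diffusion is compact: the concentration of misfolded protein in each region spreads along the connectivity graph the way heat diffuses, governed by the graph Laplacian. The sketch below is a generic illustration of that mechanism on a made-up three-region network, not Dr. Raj’s actual model or data.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Made-up connectivity matrix for three brain regions (symmetric weights).
    C = np.array([[0.0, 0.8, 0.1],
                  [0.8, 0.0, 0.5],
                  [0.1, 0.5, 0.0]])

    L = np.diag(C.sum(axis=1)) - C        # graph Laplacian of the network
    x0 = np.array([1.0, 0.0, 0.0])        # protein seeded in region 0
    beta = 1.0                            # diffusivity constant (arbitrary)

    for t in [0.0, 0.5, 2.0]:
        x_t = expm(-beta * L * t) @ x0    # diffusion along the network over time t
        print(f"t={t}: {np.round(x_t, 3)}")
    ```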

    The model identifies the neural sub-networks in the brain into which misfolded proteins will collect before moving on to other brain areas that are connected by networks of neurons. In the process the proteins alter normal functioning of all brain areas they visit.

    “What is new and really quite remarkable is the network diffusion model itself, which acts on the normal brain connectivity network and manages to reproduce many known aspects of whole brain disease patterns in dementias,” Dr. Raj says. “This provides a very simple explanation for why different dementias appear to target specific areas of the brain.”

    In the study, he was able to match patterns from the diffusion model, which traced protein dispersal in a healthy brain, to the patterns of brain atrophy observed in patients with either Alzheimer’s disease or FTD. This degeneration was measured using MRI and other tools that quantify the amount of brain volume loss in each region of a patient’s brain. Co-author Amy Kuceyeski, Ph.D., a postdoctoral fellow who works with Dr. Raj, helped analyze brain volume measurements in the diseased brains.

    “Our study demonstrates that such a spreading mechanism leads directly to the observed patterns of atrophy one sees in various dementias,” Dr. Raj says. “While the classic patterns of dementia are well known, this is the first model to relate brain network properties to the patterns and explain them in a deterministic and predictive manner.”

  • YouTube Now Recommends Videos Based On Engagement, Not Just Clicks

    How many times have you finished watching a YouTube video, then clicked on a suggested video only to end up watching something that doesn’t interest you at all? YouTube is hoping to cut down on this scenario with a change to how it selects these videos.

    Starting today, YouTube is changing the algorithm that determines which videos appear in your suggested and recommended sections. Instead of picking these videos based on view count alone, YouTube will now populate the suggestions section based on time spent watching each video as well.

    Here’s the reasoning behind this, courtesy of the Official YouTube Partners and Creators blog:

    The last time you went channel surfing, did you enjoy (or remember) the 20 TV shows you flipped through, or just the shows you watched all the way through? Would you recommend the 20 you surfed through to a friend, or the ones you actually watched? To make the videos you watch on YouTube more enjoyable, memorable, and sharable, we’re updating our Related and Recommended videos to better serve videos that keep viewers entertained.

    This makes sense. How engaged the average user is with a video should help determine whether it gets passed along as a recommendation to a particular user. Another, possibly more important, point of this move is to cut back on people gaming the system. YouTube elaborates on the YouTube Help site:

    Previously, the YouTube algorithm suggested videos (whether related videos on the watch page or recommended videos elsewhere on the site) based on how many people clicked to watch a video. This was a helpful way to promote channels, but issues like misleading thumbnails kept this system from bringing videos with deeper engagement to the top. Starting March 14th 2012, the algorithm for suggesting videos will also be based on which videos contribute to a longer overall viewing session rather than how many clicks an individual video receives. This is great for viewers because they’ll be able to watch more enjoyable content; moreover, this is great for creators because it can help build more focused and engaged audiences.

    They mention “misleading thumbnails,” a reference to a number of techniques creators use to boost views on their videos. One of the most prominent examples is the “reply girl.” Users (usually female) upload their videos as replies to popular YouTube videos, and most of the time use a sexually suggestive or otherwise unrelated image as the thumbnail in order to draw clicks.

    Basically, what YouTube is saying here is you have to make your content relevant and it has to be something that people want to watch. If a user clicks your video and only stays for a few seconds because it doesn’t satisfy what they were looking for, you’re going to have a harder time making it to the recommended and suggested video sections.
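
    To make the distinction concrete, here is a toy sketch (the numbers are invented, and this is not YouTube's actual system) of how ranking candidate suggestions by watch time per impression, rather than by raw clicks, can reorder them:

    # Toy illustration (invented data, not YouTube's algorithm): re-rank suggested
    # videos by average watch time per impression instead of raw click count.
    candidates = [
        # (title, clicks, total_seconds_watched, impressions)
        ("Flashy thumbnail, thin content", 9000, 45_000, 20_000),
        ("Slow start, in-depth tutorial", 3000, 540_000, 20_000),
        ("Decent vlog", 5000, 200_000, 20_000),
    ]

    by_clicks = sorted(candidates, key=lambda v: v[1], reverse=True)
    by_watch_time = sorted(candidates, key=lambda v: v[2] / v[3], reverse=True)

    print("Ranked by clicks:    ", [v[0] for v in by_clicks])
    print("Ranked by watch time:", [v[0] for v in by_watch_time])
    # The clickbait video tops the click ranking but drops to last place once
    # average watch time is what counts.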

    “How can you adapt to these changes?” they say. “The same as you always have — create great videos that keep people engaged. It doesn’t matter whether your videos are one minute or one hour. What matters is that your audience stops clicking away and starts watching more of your videos.”

  • Google Algorithm Testing – Search Giant Calls for Help Detecting Scrapers

    Google announced that it is testing algorithmic changes for scraper sites – blog scrapers in particular – and is calling on users to help.

    “We are asking for examples, and may use data you submit to test and improve our algorithms,” the company says on a “Report Scraper Pages” form, found here.

    Google’s head of web spam, Matt Cutts, tweeted about the new initiative:

    Scrapers getting you down? Tell us about blog scrapers you see: http://t.co/6HPhROS We need datapoints for testing.

    This testing comes after months of iterations of Google’s Panda Update, designed to improve the quality of search results, though there has been no shortage of complaints about scrapers ranking over original content in that time.

    The testing also follows a recent, big refresh of Google’s spam submission process, discussed here.

    This past week, Google shared an interesting video providing an inside look at the search algorithm tweaking process. While it contained no earth-shattering revelations, it did offer a rare visual glimpse into the process. Watch it below.

  • Google Provides Inside Look Into Algorithm Tweaking Process

    Google tweaks its search algorithms over 500 times a year. You may have already known that, but Google is sharing a new video today designed to give people a “deeper look” into how Google makes “improvements” to its algorithms.

    I don’t think everyone would agree that they’ve all been improvements, as we see complaints about this every day, but Google makes a lot of changes aimed at improving search results, nevertheless.

    “There are almost always a set of motivating searches, and these searches are not performing as well as we’d like,” says Engineering Director Scott Huffman. “Ranking engineers then come up with a hypothesis about what signal, what data could we integrate into our algorithm.”

    The video does provide some unique behind the scenes footage of Google engineers plugging away on their computers, presumably working on the algorithms.

    Google's Search Team Works on Algorithm

    Google briefly talks about the process of raters. “These are external people that have been trained to judge whether one ranking is more relevant and higher quality than another,” says software engineer Mark Paskin.

    “We show these raters a side-by-side for queries that the engineer’s experiment might be affecting,” explains Google Search Scientist Rajan Patel. “We also confirm these changes with live experiments on real users.”

    “We do this in something called a sandbox. We send a very small fraction of actual Google traffic to the sandbox. We compute lots of different metrics,” says Paskin.

    “In 2010, we ran over 20,000 different experiments. All the data from the human evaluation and the live experiment are then rolled out by a search analyst,” says Huffman.
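
    As a generic illustration of that traffic-splitting idea (this is not Google's implementation; the fraction, hashing scheme, and request IDs below are invented), a minimal sketch of diverting a small slice of live traffic to an experimental arm might look like this:

    import hashlib

    # Generic sketch of a live "sandbox" experiment (illustrative only, not
    # Google's implementation): deterministically divert a small fraction of
    # requests to an experimental ranking and tally which arm served them.
    EXPERIMENT_FRACTION = 0.01  # send about 1% of traffic to the sandbox

    def assign_arm(request_id: str) -> str:
        """Hash the request ID into [0, 1) and bucket it into control/experiment."""
        digest = hashlib.sha256(request_id.encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0x100000000
        return "experiment" if bucket < EXPERIMENT_FRACTION else "control"

    # Tally how a batch of request IDs would split across the two arms.
    counts = {"control": 0, "experiment": 0}
    for i in range(100_000):
        counts[assign_arm(f"request-{i}")] += 1
    print(counts)  # roughly 99,000 control vs. 1,000 experiment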

    Sangeeta Das, a quantitative analyst, says, “For each project, it’s usually one analyst assigned from the moment that we’re talking to the engineers, trying to learn about their change.”

    “We then have a launch decision meeting where the leadership of the search team then looks at that data and makes a decision,” says Huffman.

    Search Meeting

    “Ultimately, the goal of the search eval analyst team is to provide informed, data-driven decisions and present an unbiased view,” says Das.

    “If our scientific testing says this is a good idea for Google users, we will launch it on Google,” says Google Fellow Amit Singhal.

    The video then looks at the “did you mean” and “showing results for” features as an example.

  • Human Content Creation Still Safe For The Time Being

    An article at Harvard Business Review takes an interesting look at “seven things human editors do that algorithms don’t (yet).” They boil down to: anticipation, risk-taking, the whole picture, pairing, social importance, mind-blowingness, and trust.

    Clearly there’s still room for humans on the web. In search, that’s good news for Blekko, which brings the old human-edited approach back into the mix of an industry that has largely been dominated by the algorithm for the last decade, though the jury’s still out on whether it will ever be as effective as Google.

    In terms of content creation, we’ve already seen the beginnings of what the algorithm can do. Look at Demand Media’s business model (at least for the content portion) – it’s largely algorithm based, though it still uses humans to write and edit the content.

    The future content farm may be a different story though. We’ve also seen the absence of human intervention in content creation. Look at what Narrative Science is doing. The company, run by a former DoubleClick executive, describes itself in the following manner:

    “We tell the story behind the data. Our technology identifies trends and angles within large data sources and automatically creates compelling copy. We can build upon stories, providing deeper context around particular subjects over time. Every story is generated entirely from scratch and is always unique. Our technology can be applied to a broad range of content categories and we’re branching into new areas every day.” 

    Look at what IBM has been able to accomplish through machine learning with its robot Watson. How long until a bunch of Watsons are creating content for the web (and creating other Watsons, for that matter)?

    The good news is it might still be a while before robots replace us all. Going back to the points made in the Harvard Business Review piece by Eli Pariser: he notes that algorithms aren’t yet as good as humans at anticipating future news.

    As far as risk-taking goes, Pariser writes that Chris Dixon, the co-founder of personalization site Hunch, calls this "the Chipotle problem": "As it turns out, if you are designing a where-to-eat recommendation algorithm, it's hard to avoid sending most people to Chipotle most of the time. People like Chipotle, there are lots of them around, and while it never blows anyone's mind, it's a consistent three-to-four-star experience. Because of the way many personalization and recommendation algorithms are designed, they'll tend to be conservative in this way — those five-star experiences are harder to predict, and they sometimes end up one-star ones. Yet, of course, they're the experiences we remember."
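
    A toy example makes the point (the ratings below are invented): a recommender that ranks purely by average predicted rating will keep steering people toward the safe, consistent choice.

    import statistics

    # Toy illustration of the "Chipotle problem" (ratings invented): ranking by
    # expected rating favors the reliably decent option over the risky one that
    # occasionally delivers a five-star experience.
    ratings = {
        "Chipotle": [3.5, 4.0, 3.5, 4.0, 3.5],          # consistent, never amazing
        "New tasting menu": [5.0, 2.0, 5.0, 1.5, 2.0],  # memorable highs, real misses
    }

    for place, scores in sorted(ratings.items(),
                                key=lambda kv: statistics.mean(kv[1]),
                                reverse=True):
        print(f"{place:16s} expected rating = {statistics.mean(scores):.2f}")
    # Chipotle wins on expected rating (3.70 vs. 3.10), even though the riskier
    # pick produced the five-star experiences people actually remember.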

    Would you trust content created by algorithms or do you put your trust in humans?

  • Google Activity That May Have an Impact on Rankings

    There are currently some interesting happenings with Google search that webmasters may want to pay attention to. The company, which is always busy, has been making moves that may greatly affect its flagship product – search. This is all in addition to everything it is doing in social media, mobile, gaming, advertising and everything else (all of which may have their own separate impacts on search).

    Have you noticed recent changes in your ranking? Tell us about it.

    Algorithm Change

    Google makes changes to its algorithm all the time, but when a change comes with an announcement, you know people are going to talk. On Friday, Google announced a tweak designed to surface multiple pages from a single site for relevant queries.

    "For queries that indicate a strong user interest in a particular domain, like [exhibitions at amnh], we’ll now show more results from the relevant site," says Google software engineer Samarth Keshava. "Prior to today’s change, only two results from www.amnh.org would have appeared for this query. Now, we determine that the user is likely interested in the Museum of Natural History’s website, so seven results from the amnh.org domain appear. Since the user is looking for exhibitions at the museum, it’s far more likely that they’ll find what they’re looking for, faster. The last few results for this query are from other sites, preserving some diversity in the results."

    Google Tweaks Algorithm
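
    As a rough sketch of the general idea (this is not Google's actual code; the cap values, domains, and result counts are invented), relaxing a per-domain limit when a query shows strong interest in one site might look something like this:

    from collections import Counter

    def apply_domain_cap(results, dominant_domain=None, default_cap=2, relaxed_cap=7):
        """results: list of (domain, title) tuples already in ranked order."""
        seen = Counter()
        kept = []
        for domain, title in results:
            cap = relaxed_cap if domain == dominant_domain else default_cap
            if seen[domain] < cap:
                kept.append((domain, title))
                seen[domain] += 1
        return kept

    # Eight hypothetical amnh.org results plus two from other sites, in ranked order.
    ranked = [("amnh.org", f"AMNH page {i}") for i in range(8)]
    ranked += [("other-site.com", "Exhibit review"), ("blog.example", "Museum tips")]

    without = apply_domain_cap(ranked)
    with_interest = apply_domain_cap(ranked, dominant_domain="amnh.org")
    print(sum(1 for d, _ in without if d == "amnh.org"))        # 2 results from amnh.org
    print(sum(1 for d, _ in with_interest if d == "amnh.org"))  # 7 results from amnh.org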

    Not all webmasters have been thrilled with this. "Brace yourselves! Another Mayday disaster coming," one person commented on our story about it.

    What do you think of this algorithm change? Comment here.

    Experimenting

    Just as the company frequently changes its algorithm, it also frequently experiments with different features, showing them to small sets of users before either turning them into full-fledged features or throwing them away. The jury’s still out on this one, but a new experiment has been spotted that alters search results as you type your query.

    Think of this like autosuggest taking over the entire SERP. The video demonstrates:

    Again, this is only an experiment at this stage, and it may never make its way into the mainstream Google experience, but people are already expressing a great deal of concern about it (particularly with regard to queries that begin with words that could yield undesired NSFW results).

    My guess is that Google would have ways around that issue, but it remains to be seen whether users/webmasters will have to deal with it. If the feature does come to fruition, it is something SEOs are going to have to consider, as it could have a big impact on the habits of searchers. You may, for example, want to optimize more for the earlier words in a longer key phrase, in addition to the key phrase itself. But, we’ll see.

    Should Google change search results as you type? Comment here.


    Google Crawling Sites From Numerous IPs

    Barry Schwartz at Search Engine Roundtable points to some discussion among SEOs at WebmasterWorld, who have found for the first time that Googlebot is crawling from several different IP addresses at the same time. One webmaster said, "…their fast activity notified me so I took a peek to see who was scraping the site… I’ve never seen Google spider so fast and from so many IP addresses, they were all valid Google IPs but there was like 10-20 of them running at once."

    Acquisition

    Google acquires Like.com

    The other day, it was officially announced that Like.com has been acquired by Google. Like.com is a shopping search company offering visual search technology and an automated cross-matching system for clothing and other merchandise.

    At this point, it’s unclear what Google has planned for this technology, but it could very well affect search results for shopping queries, which means it could affect small business visibility for better or for worse. Shopping search is going to be an area of Google to keep an eye on.

    Have you noticed anything else interesting happening with Google search within the last week or so? Are you seeing things that are impacting your rankings? Let us know.