WebProNews

Tag: Stuff

  • Telegraph, Like Wikipedia, Keeps List Of Articles ‘Forgotten’ By Google

    The “right to be forgotten” mess continues to get even messier. At least one newspaper is publicly documenting the articles that Google has removed from its search results because of the law, while also writing articles about those removals.

    So here’s an example of not only why the law is inherently flawed, but also of how much time it’s wasting on pretty much everybody’s part.

    The Daily Telegraph, as described by Danny Sullivan at Marketing Land, has been on a “campaign to document all its stories that have been removed” as a result of the law. The Telegraph’s Matthew Sparkes even tweeted about how he’s spending his time (which would no doubt be better used reporting actual news).

    The list referenced in that last tweet contains eight bullet points about articles and images removed.

    Similarly, Wikipedia is keeping a running tab of stories that have been removed by Google.

    In other words, people requesting articles be removed only seem to be drawing more attention to the fact that they’ve done so, which seems to defeat the entire purpose. Shocking, right?

    For more background on the “right to be forgotten” and Google’s role, peruse our coverage here.

    Image via Google

  • Microsoft Is Scanning Email for Child Porn Too

    Google’s not the only one automatically scanning your communications for traces of child porn.

    A Pennsylvania man has been arrested after he attempted to send child porn via a live.com email address. The arrest came after Microsoft tipped the National Center for Missing and Exploited Children about the illegal images, which officials subsequently found in the man’s OneDrive account.

    This shouldn’t come as a huge surprise to anyone who’s been paying attention. Like Google, Microsoft lets you know in its terms of service that automated processes are looking for child porn.

    “In many cases Microsoft is alerted to violations of the Code of Conduct through customer complaints, but we also deploy automated technologies to detect child pornography or abusive behavior that might harm the system, our customers, or others,” says Microsoft.

    Also, Microsoft has openly discussed this practice for years. This isn’t even the first instance of Microsoft alerting authorities to child porn. A little over a year ago, a Florida man was charged with 15 counts of child pornography possession after police found more than 3,000 images on his SkyDrive – thanks to a tip from Microsoft.

    But the conversation has been ramped up recently, thanks to the recent bust of a Texas man for emailing child porn – a bust facilitated by a Google tip.

    Google has spent the past few days explaining that their automated systems are looking for child porn, and only child porn.

    “It is important to remember that we only use this technology to identify child sexual abuse imagery, not other email content that could be associated with criminal activity (for example using email to plot a burglary),” said Google in one of the best corporate statements ever.

    Dedicated privacy hawks might argue that any sort of intrusion, whether well-intentioned or not, is unacceptable. But I’d be willing to bet that the majority of people are happy with child porn busts, and are ok with the automated scanning that facilitates them. Of course, this sentiment would quickly change if it turned out the scanning went far beyond this narrow scope. Other than ads, of course – we know Google scans message keywords to target ads.

    Image via Microsoft

  • Google Doesn’t Watch for Most Criminal Activity in Your Email, Just Child Porn

    Feel free to sell someone some drugs or plan a burglary (Google’s own words) with your Gmail – Google isn’t looking for that kind of criminal activity. Yes, Google is scanning your email, but it’s only to spot child porn (and serve you ads, of course).

    You probably heard that a Texas man was arrested and charged with possession of child pornography after Google (Google’s robots, more specifically) detected the images in the man’s Gmail. He was attempting to email the images to another user. When Google identified the images, they sent a tip to the National Center for Missing and Exploited Children, who then notified the police. That led to a search warrant, which unearthed child porn on the man’s various devices.

    Naturally, this news elicited mixed reactions. First off, awesome – a child pornographer was caught. But also, privacy? Google’s looking at all of my email images?

    Well, of course they are. They’re not trying to hide it.

    “Our automated systems analyze your content (including emails) to provide you personally relevant product features, such as customized search results, tailored advertising, and spam and malware detection. This analysis occurs as the content is sent, received, and when it is stored.”

    That’s in Google’s TOS. They also use those automated systems to scan for illegal child porn images, and this isn’t the first time that they’ve tipped off authorities.

    But if you are worried about all the other kinds of illegal activity laid out in your emails – Google says don’t be.

    Here’s my favorite company response in recent memory:

    Sadly all Internet companies have to deal with child sexual abuse. It’s why Google actively removes illegal imagery from our services — including search and Gmail — and immediately reports abuse to NCMEC [The National Center for Missing & Exploited Children]. This evidence is regularly used to convict criminals.

    Each child sexual abuse image is given a unique digital fingerprint which enables our systems to identify those pictures, including in Gmail.

    It is important to remember that we only use this technology to identify child sexual abuse imagery, not other email content that could be associated with criminal activity (for example using email to plot a burglary).

    You know, like burglary.
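    Google hasn’t published its implementation, but the matching step its statement describes can be sketched as a lookup against a database of fingerprints of known images. This is a rough illustration only: real systems (such as Microsoft’s PhotoDNA) use perceptual hashes that survive resizing and re-encoding, whereas the sketch below uses a plain cryptographic hash, and all names and values here are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images.
# (This entry is just the SHA-256 of the bytes b"test", used as a stand-in.)
KNOWN_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a fixed-length fingerprint for an image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_image(image_bytes: bytes) -> bool:
    """True if an attachment's fingerprint matches the database."""
    return fingerprint(image_bytes) in KNOWN_FINGERPRINTS

# Each attachment is checked as mail passes through the system;
# only matches against the known-image database are flagged.
print(is_known_image(b"test"))      # matches the stand-in entry
print(is_known_image(b"vacation"))  # unrelated content is never flagged
```

    The key property is that only matches against a pre-existing database get flagged; the system has no way to “read” other message content, which is the narrow scope Google’s statement emphasizes.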

    While some extreme privacy hawks may see no circumstance where email snooping, even if only by robots, is acceptable – I think the vast majority of reasonable people can get behind Google’s initiative to find child pornographers and help put them in jail.

    Image via Google

  • Google On Complexity Of ‘Right To Be Forgotten’

    As previously reported, Google (as well as Microsoft and Yahoo) attended a meeting last week with EU regulators to discuss the “right to be forgotten” ruling and the search engines’ approach to handling it.

    Each of the companies was given a questionnaire (via The New York Times), asking about various aspects of their practices related to complying with the ruling. Google’s has been made publicly available, and in it, the company discusses complications it faces.

    Asked about criteria used to balance the company’s own economic interest and/or the interest of the general public in having access to info versus the right of the data subject to have search results delisted, Google said:

    The core service of a search engine is to help users find the information they seek, and thus it is in a search engine’s general economic interest to provide the fastest, most comprehensive, and most relevant search results possible. Beyond that abstract consideration, however, our economic interest does not have a practical or direct impact on the balancing of rights and interests when we consider a particular removal request.

    We must balance the privacy rights of the individual with interests that speak in favour of the accessibility of information including the public’s interest to access to information, as well as the webmaster’s right to distribute information. When evaluating requests, we will look at whether the search results in question include outdated or irrelevant information about the data subject, as well as whether there’s a public interest in the information.

    In reviewing a particular removal request, we will consider a number of specific criteria. These include the individual (for example, whether an individual is a public figure), the publisher of the information (for example, whether the link requested to be removed points to material published by a reputable news source or government website), and the nature of the information available via the link (for example, if it is political speech, if it was published by the data subject him- or herself, or if the information pertains to the data subject’s profession or a criminal conviction).

    Each criterion, the company continued, has its own “potential complications and challenges”. It then proceeded to list these examples:

    • It is deemed to be legitimate by some EU Member States that their courts publish rulings that include the full names of the parties, while courts in other Member States anonymise their rulings before publication.
    • The Internet has lowered the barrier to entry for citizen journalists, making it more difficult to precisely define a reputable news source online than in print or broadcast media.
    • It can be difficult to draw the line between significant political speech and simple political activity, e.g. in a case where a person requests removal of photos of him- or herself picketing at a rally for a politically unpopular cause.

    As previously assessed, it’s a real mess.

    Google says in the document that it has not considered sharing delisted search results with other search engines, adding, “We would note that sharing the delisted URLs without further information about the request would not enable other search engine providers to make informed decisions about removals, but sharing this information along with details or a copy of the complaint itself would raise concerns about additional disclosure and data processing.”

    For some reason, I’m reminded of that time Google accused Bing of stealing its search results.

    You can read Google’s full questionnaire responses here.

    As of July 18th, Google had received over 91,000 removal requests involving over 328,000 URLs. Earlier this week, Google announced dates for presentations to its Advisory Council, aimed at evolving the public conversation and informing ongoing strategy.

    Image via Google

  • Google Announces ‘Right To Be Forgotten’ Tour 2014

    Google has released a schedule for presentations from “experts” on the “right to be forgotten,” which will take place throughout the fall. Consider it Google’s Right to be Forgotten Tour 2014 (I hope there are t-shirts).

    The company recently announced the formation of its Advisory Council on the subject, which stems from a ruling by the Court of Justice of the European Union, saying that search engines must provide people in the EU with a means of requesting content about them be removed from search results. You can get caught up on the whole mess here, but suffice it to say, it’s been a controversial battle between privacy and censorship. Many questions and concerns remain, which is precisely why Google is holding these “in-person public consultations”.

    The schedule is as follows:

    September 9 in Madrid, Spain
    September 10 in Rome, Italy
    September 25 in Paris, France
    September 30 in Warsaw, Poland
    October 14 in Berlin, Germany
    October 16 in London, UK
    November 4 in Brussels, Belgium

    “The Council welcomes position papers, research, and surveys in addition to other comments,” says Betsy Masiello, Google Secretariat to the Council. “We accept submissions in any official EU language. Though the Council will review comments on a rolling basis throughout the fall, it may not be possible to invite authors who submit after August 11 to present evidence at the public consultations.”

    There’s a form here, for those who wish to voice their concerns and be considered for presentation.

    Last week, EU regulators held a meeting with the search engines about the subject, where Google reportedly disclosed that it had removed over 50% of URLs requested, rejected over 30%, and requested additional info in 15% of cases. It had received requests from 91,000 people to remove 328,000 URLs through July 18.

    More on Google’s Advisory Council here.

    Image via Google

  • Google Reportedly Reveals ‘Right To Be Forgotten’ Stats

    EU regulators had that meeting with the search engines about the “right to be forgotten” ordeal on Thursday, and Google did indeed participate. There had initially been some question regarding whether or not Google would be in attendance.

    The Wall Street Journal has a source apparently with direct knowledge of what was discussed in the meeting, and shares some stats Google presented, which include:

    – It has removed over 50% of URLs requested.

    – It has rejected over 30%.

    – It has requested additional info for 15% of cases.

    – It has received requests from 91,000 people to remove 328,000 URLs through July 18.

    – 17,500 requests came from France, while 16,500 came from Germany, and 12,000 came from the UK.

    There’s no word on what kind of numbers Bing is seeing, though its tool has only been available since last week.

    Yahoo was reportedly also in attendance at the meeting, but it’s still unclear what the company’s status is in relation to the law.

    Image via Google

  • Google Faces Potential Fines, Criminal Charges In Italy [Report]

    Google continues to walk down a rocky road in Europe as it now faces fines and potential criminal charges if it does not meet requirements set by regulators in Italy.

    The company has been dealing with numerous regulatory issues throughout the continent. In addition to a lengthy antitrust probe, Germany mulling regulating the company like a utility, and that whole “right to be forgotten” mess, Google also continues to face concerns related to a privacy policy it implemented a couple years ago.

    The policy, implemented in 2012, essentially consolidated the policies of Google’s various products into a single policy, enabling Google to use data from one of its services across the others. While it has been mostly accepted here in the U.S. by this point, it has remained a hot-button issue throughout Europe, especially in France.

    Google has already been fined in France and Spain over the policy, and still faces action in the UK and Netherlands. Italy, it is being reported, has given Google 18 months to comply with its demands. Reuters reports:

    The Rome-based regulator said Google would not be allowed to use the data to profile users without their prior consent and would have to tell them explicitly that the profiling was being done for commercial purposes. It also demanded that requests from users with a Google account to delete their personal data be met in up to two months.

    A spokesman for Google said the company had always cooperated with the regulator and would continue to do so, adding it would carefully review the regulator’s decision before taking any further steps.

    Google, according to the report, has agreed to present the government with a document detailing its plan of action by the end of September. The company reportedly faces a fine of a million euros and possible criminal proceedings if it fails to comply.

    Image via Google

  • Netflix Toys Around With Private Viewing

    Netflix is reportedly testing a new feature that would enable users to hide the titles they’ve been watching for fear of embarrassment.

    Janko Roettgers at GigaOm shares this statement from Netflix’s Cliff Edwards:

    “At Netflix we continuously test new things. In this case, we are testing a feature in which a user watching a movie or TV show can choose to view in “Privacy Mode.” Choosing that option means the program will not appear in your viewing activity log, nor will it be used to determine recommendations about what you should watch in the future.”

    That bit about not using it for recommendations is probably as good a reason as any to provide users with such a feature. Even if you’re not trying to hide what you’ve been watching, you may want to keep some titles out of the pool of inspiration for what Netflix thinks you should watch in the future.

    Maybe you have a guest over and watch something that you’d otherwise never watch in a million years, but don’t necessarily want to bother with a different profile. Just watch it in private mode, and don’t worry about it polluting your own. Of course, that might create an awkward situation between you and your guest.

    Either way, Netflix might as well give people the option to view titles this way. It doesn’t seem like too much to ask for. It’s not as if they’re launching features all the time.

    Image via Netflix

  • Dakota Fanning Won’t Parade Personal Life

    Dakota Fanning is insistent that she won’t parade her personal life in front of anyone’s cameras. And at times the Very Good Girls star is a bit miffed at how people think they know her personally. It’s probably not unlike many former child stars who have grown up on the big screen or on TV. Fans feel like they’re part of those stars’ lives. Dakota feels passionately that they’re not.

    “Because people saw me grow up, there’s this weird sort of ownership that they feel for me and that is… difficult. Because it’s not real; it’s in their minds. People don’t know me as much as they think they do,” she said during a recent interview with Town and Country magazine. “I’ll be walking down the street and someone will say hello, and I’ll go, ‘Oh, hi!’ I’ll think I must know this person if they said hello, but then you realize, you don’t know them.”

    She also dished on her decision to thrust herself into the limelight–one she says was no one’s but her own.

    “I was an exceptionally mature child,” she said.

    That maturity is likely what makes her even more determined–at 20–to keep her private life to herself.

    “I’m just never going to parade my personal life. If you choose to not do it, it’s not hard to not do it,” she said. “Any part of an artistic business is made better by there being a little mystery. That’s what movies are about.”

    Fanning’s life in Very Good Girls is not so private. Her character and that of Elizabeth Olsen–best friends for years–make a pact to lose their virginity before heading off to college. The plot takes a twist when they both start seeing the same man. The film debuted at Sundance in January and is expected to be released in limited theaters across the U.S. later in July.

    Dakota Fanning’s younger sister Elle Fanning has made quite a name for herself in the acting world, too–most recently starring with Angelina Jolie in Maleficent. It’s uncertain if she plans to keep her private life as private as big sister Dakota’s but so far not much about it has hit the media–so it’s a safe bet she’ll share little to nothing with her fans, too.

    In the meantime Dakota Fanning is going full steam ahead with her film work. She is seeking roles depicting strong female characters and hopes to eventually try her hand at directing, too.

    “It’s very hard to find a movie about a strong woman – one that doesn’t have anything to do with a guy or the love of a guy or the heartbreak of a guy,” she said. “Is that the only crisis that women deal with: love and loss of love and sadness? There’s more to life than that.”

    Fanning played a role about love, lust, and female wiles in The Last of Robin Hood–an indie film that screened at the Toronto Film Festival.

    There’s clearly way more to life than cunningness and heartbreak in Dakota Fanning’s world–however by her own choosing her fans won’t be privy to much of it at all. Good for her for respecting her privacy and insisting that it remain that way with her fans as well.

    Image via YouTube

  • Try Facebook’s Creepy Mood Experiment on Yourself with New Chrome Extension

    Did you miss out on Facebook’s controversial 2012 mood experiment that altered the makeup of your news feed, hoping to make you really, really sad? Of course you did – as far as you know. You’re never going to know whether you were one of the 700,000 users randomly selected as Facebook’s guinea pigs, but since that’s less than one percent of the total Facebook user base, it’s likely that you missed it. Shucks!

    If you’re dying to know what it feels like to have Facebook screw with your emotions, you’re in luck. You can install and control your own Facebook Mood Manipulator, courtesy of a new Chrome extension.

    According to creator Lauren McCarthy, her mood manipulator is built on the Linguistic Inquiry Word Count, the exact same system that Facebook used in its now-probed experiment.

    “Facebook Mood Manipulator is a browser extension that lets you choose how you want to feel and filters your Facebook Feed accordingly,” says McCarthy. “Aw yes, we are all freaked about the ethics of the Facebook study. And then what? What implications does this finding have for what we might do with our technologies? What would you do with an interface to your emotions?”

    Well, now you can find out.

    The extension places a little box at the upper right-hand corner of your news feed. From there, you can control four different emotional elements with a slider – positive, emotional, aggressive, and open. I’m going to guess that you’ll have the most fun turning the ‘positive’ down and cranking all the rest up to 11. Angry Facebook friends are the best Facebook friends.
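    LIWC-style scoring boils down to counting category words in each post and filtering the feed against the slider thresholds. Here is a minimal sketch of that idea; the word lists, threshold, and function names are invented for illustration, not taken from McCarthy’s actual extension or the LIWC dictionaries.

```python
# Hypothetical word lists; LIWC's real dictionaries are far larger
# and cover many more categories than this sketch.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "angry", "terrible", "hate", "awful"}

def positivity(post: str) -> int:
    """Score a post: +1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def filter_feed(posts: list[str], min_score: int) -> list[str]:
    """Keep only posts at or above the chosen positivity threshold."""
    return [p for p in posts if positivity(p) >= min_score]

feed = [
    "I love this wonderful day",
    "terrible awful news again",
    "lunch was fine",
]
# Slider pushed toward "positive": neutral and negative posts drop out.
print(filter_feed(feed, min_score=1))
```

    Dragging the slider the other way is just a matter of flipping the comparison, which is presumably why a browser extension can pull this off with nothing more than word counts over the posts already rendered in your feed.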

    So, try it out. F*ckin’ right, Facebook – I’ll manipulate my own mood.

    Images via Lauren McCarthy, Mood Manipulator and Thinkstock
    h/t The Next Web

  • Facebook’s Experiment Was ‘Poorly Communicated’ Says Sheryl Sandberg

    Facebook COO Sheryl Sandberg has apologized to Facebook users for the unannounced and just recently unearthed psychological experiment it conducted on a small number of users back in 2012. She says that the news feed experiment was “poorly communicated”.

    The experiment, which was revealed in a study called “Experimental Evidence Of Massive-Scale Emotional Contagion Through Social Networks”, has been denounced by many as Facebook crossing the line. A couple of years ago, Facebook researchers decided to manually alter the news feeds of around 700,000 users (a very small percentage of Facebook’s one billion+ user base) by showing them more or fewer positive or negative posts.

    The goal was to see if emotion was ‘contagious’ on social media: in other words, whether seeing more negatively-themed status updates would make users post more negative updates in return.

    The study produced fairly weak results.

    “This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated,” said Sandberg. “And for that communication we apologize. We never meant to upset you.”

    “We take privacy and security at Facebook really seriously because that is something that allows people to share,” she continued.

    Although Facebook’s Terms of Service allow for “research”, the clause that covers this was added months after the experiment in question. To some, it was an unethical overreach by Facebook – one of many privacy violations from the decade-old company.

    One of the experiment’s authors has stated that the whole thing was about providing a better service to users.

    We’ll see if Sandberg’s statement has any effect on a new probe that the UK’s Information Commissioner’s Office has launched. They’re currently looking at what laws, if any, Facebook broke in the operation of the experiment.

    Image via Wikimedia Commons

  • Regulators Probe Facebook’s Emotion Experiment

    An experiment Facebook conducted with some of its users two years ago has been getting a lot of negative attention in recent days after a paper about it was published. The company basically took about 700,000 users, and tested the effects of showing them more positive or more negative posts in their News Feeds. The goal was to see how it affected users’ emotions (or at least the emotions conveyed in their own posts).

    Facebook has language in its terms of service indicating that it can use info for its own internal research, but it has come to light that this language was actually added after the test was conducted. Some people are outraged, and are calling Facebook’s practices unethical.

    Consumer Watchdog has publicly attacked the company (though this is pretty standard), and now regulators are taking a look at the situation.

    The Financial Times reports (registration required) that the Information Commissioner’s Office in the UK is now investigating the company, and that it said it’s too early to tell what part of the law the company may have broken (if any). According to the report, the ICO has the power to force a company to change its policies and levy fines of up to £500,000.

    Additionally, as Bloomberg reports, the Irish Data Protection Commissioner’s office has been in contact with the company, which is important to note as Facebook’s European headquarters are located in Ireland. That report includes a statement from a Facebook UK spokesperson:

    “It’s clear that people were upset by this study and we take responsibility for it. We want to do better in the future and are improving our process based on this feedback. The study was done with appropriate protections for people’s information and we are happy to answer any questions regulators may have.”

    Facebook’s Adam Kramer, who co-authored the study, previously offered an explanation in a Facebook post. COO Sheryl Sandberg also reportedly said that the company did a poor job of communicating about it.

    Image via YouTube

  • Actually, Facebook Changed Its Terms To Cover That Experiment After It Was Over

    The plot thickens.

    As you may know, it has come to light that Facebook ran an experiment with nearly 700,000 users in 2012, showing how it could manipulate emotions by showing users more positive or negative content in their News Feeds.

    As some have pointed out, Facebook’s terms say it can use users’ info “for internal operations, including troubleshooting, data analysis, testing, research, and service improvement,” with research being the keyword in this case. Only one problem: that wasn’t actually in the terms when Facebook carried out the experiment.

    Forbes points out that Facebook made changes to its data use policy four months after the experiment, and yes, that bit about research was one of those changes.

    So if you were already upset about Facebook’s little test, there’s some more fuel for the fire. For some reason, images of Mark Zuckerberg sweating bullets while being grilled about privacy on stage at the D8 conference are coming to mind.

    Facebook now has Consumer Watchdog on its back over the whole thing. The organization put out a press release calling Facebook’s research “unethical”.

    “There is a longstanding rule that research involving human subjects requires informed consent. The researchers clearly didn’t get it,” said John M. Simpson, Consumer Watchdog’s Privacy Project director, “Sleazy, unethical behavior is nothing new for Facebook, so I’m not really surprised they would do this. The academic researchers involved with the project and the National Academy of Sciences, which published the results, should be ashamed.”

    “Facebook’s TOS — like those of most Internet companies — are cleverly crafted by high-priced lawyers so as to be virtually indecipherable to the average user, but allow Facebook to do essentially whatever it wants commercially,” said Simpson. “It protects Facebook and its sleazy business practices, but it in no way provides the level of informed consent that is expected and required when doing research with human subjects.”

    Obviously that was before it came to light that the part about research wasn’t even in the ToS when the experiment was carried out.

    “Facebook has no ethics,” said Simpson. “They do what they want and what is expedient until their fingers are caught in the cookie jar. Like the rest of the tech giants, they then apologize, wait a bit and then try something new that’s likely to be even more outrageous and intrusive. Silicon Valley calls this innovation. I call it a complete disrespect for societal norms and customs.”

    Yes, the current outrage will no doubt die down within the week, and Facebook will carry on being Facebook. And Facebook users will carry on using Facebook.

    Image via YouTube

  • Facebook Fights ‘Unprecedented’ Data Grab

    Facebook says that they are currently fighting a “set of sweeping search warrants” in an effort to protect their users’ information – and so far it’s a fight that they’ve been losing.

    “Since last summer, we’ve been fighting hard against a set of sweeping search warrants issued by a court in New York that demanded we turn over nearly all data from the accounts of 381 people who use our service, including photos, private messages and other information. This unprecedented request is by far the largest we’ve ever received—by a magnitude of more than ten—and we have argued that it was unconstitutional from the start,” says Facebook Deputy General Counsel Chris Sonderby.

    “Of the 381 people whose accounts were the subject of these warrants, 62 were later charged in a disability fraud case. This means that no charges will be brought against more than 300 people whose data was sought by the government without prior notice to the people affected. The government also obtained gag orders that prohibited us from discussing this case and notifying any of the affected people until now.”

    According to the New York Times, the “sweeping warrants” came during the investigation into a fraud case involving retired police officers, firefighters, and other civil servants who’ve been charged with filing fake disability claims. The information obtained from Facebook was crucial to the investigation, as photos taken from the site showed “disabled” people, well, not acting very disabled.

    The Manhattan DA’s office says that multiple courts have already found Facebook’s protestations without merit.

    “This was a massive scheme involving as many as 1,000 people who defrauded the federal government of more than $400 million in benefits,” said a spokeswoman for Manhattan DA Cyrus R. Vance Jr. “The defendants in this case repeatedly lied to the government about their mental, physical and social capabilities. Their Facebook accounts told a different story. A judge found there was probable cause to execute search warrants, and two courts have already found Facebook’s claims without merit.”

    That’s true, and Facebook said that they eventually complied with the data request only after they were denied in appeals court.

    But now, they’ve filed another appeal in their “continuing efforts to invalidate these sweeping warrants and to force the government to return the data it has seized and retained.”

    This isn’t the first time that Facebook has pushed back against overbroad data requests. But this is the first time we’ve seen Facebook challenge the notion that they must comply with a warrant they deem in violation of their users’ Fourth Amendment rights.

    Facebook has always had a decent relationship with law enforcement – one that is cooperative enough to have been accused of being a bit too chummy. But in their latest fight, Facebook’s pretty clear that this sort of “overreaching legal request” goes way too far.

    “We believe search warrants for digital information should be specific and narrow in scope, just like warrants for physical evidence. These restrictions are critical to preventing overreaching legal requests and protecting people’s information,” says Sonderby.

  • Facebook Says It’s ‘Not Always Listening’ as Petition Against New Feature Nears 600,000 Signatures

    A lot of people, both Facebook users and otherwise, don’t really trust Facebook. So, when Facebook announced a new feature that can passively listen to users’ background activity in order to easily identify songs, TV shows, and movies for status-sharing purposes, it wasn’t surprising when it struck people as a bit creepy.

    Or as one petition put it, a “massive threat to our privacy.”

    Yes, May’s announcement (the actual feature goes live soon) ushered in a swift and ferocious response from those concerned about privacy. According to Facebook, users will always be in control of when the app is listening, and if/when they share what it hears with everyone else. Of course, the constant distrust of the company’s motivations led some users to focus on the nefarious ways Facebook could use that kind of technology.

    Would they listen to our conversations? Would they store all of that data? Would they sell it to the highest bidder?

    A petition was started, asking Facebook to respect user privacy and put the kibosh on plans to release the passive listening app feature. When we reported on the petition about a week ago, it had garnered a little over 200,000 signatures.

    As of today, it’s coming up on 600,000.

    “Facebook says the feature will be used for harmless things, like identifying the song or TV show playing in the background, but it actually has the ability to listen to everything — including your private conversations — and store it indefinitely,” says the petition. “Not only is this move just downright creepy, it’s also a massive threat to our privacy. This isn’t the first time Facebook has been criticized for breaching our right to privacy, and it’s hoping this feature will fly under the radar. No such luck for Facebook. If we act now, we can stop Facebook in its tracks before it has a chance to release the feature.”

    Facebook has now spoken out about users’ concerns, looking to quash the so-called “myths” that have taken root online.

    “The microphone doesn’t turn itself on, it will ask for permission. It’s not always listening…so it’s very limited in what it is sampling,” Facebook Security Infrastructure head Gregg Stefancik told CNET. “I wouldn’t want this in my pocket either if it was recording everything going on around me.”

    He went on to explain exactly what’s happening when Facebook “matches” an audio clip it hears with one in its database.

    “If there’s a match, we return what the match is to the user [and] give them the option of posting the match. The user is in complete control and the audio fingerprint that we’ve received is disposed of immediately. The raw audio never leaves the phone and the data about the match is only stored if you choose to post it,” he said.

    He did clarify that if Facebook matches a sound with a song or TV show and you choose not to share it, Facebook will keep a tally of that match in order to “keep a chart of the most watched and listened to songs and shows” – but it won’t be tied to your personal profile.
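    The flow Stefancik describes can be sketched roughly as follows. This is a hypothetical illustration, not Facebook's actual implementation: real audio-matching systems derive fingerprints from robust acoustic features (such as spectrogram peaks) so that noisy room recordings still match, whereas the hash used here is only a stand-in to show the client/server flow. All names are assumptions.

    ```python
    import hashlib

    KNOWN_MEDIA = {}  # server-side index: fingerprint -> title

    def fingerprint(audio_sample: bytes) -> str:
        # Reduce the raw audio to a short, non-reversible digest.
        # Per Facebook's description, only this digest leaves the phone,
        # never the raw audio itself.
        return hashlib.sha256(audio_sample).hexdigest()[:16]

    def register(title: str, audio_sample: bytes) -> None:
        # Server-side: index known songs and shows by their fingerprints.
        KNOWN_MEDIA[fingerprint(audio_sample)] = title

    def match(audio_sample: bytes):
        # Client sends the fingerprint; the server returns a title or None.
        # The company says the received fingerprint is disposed of after lookup.
        return KNOWN_MEDIA.get(fingerprint(audio_sample))

    register("Season premiere of Game of Thrones", b"\x00\x01\x02 fake audio")
    print(match(b"\x00\x01\x02 fake audio"))  # the registered title
    print(match(b"unrecognized background noise"))  # None
    ```

    The point of the design, as described, is that the match result – not the audio – is the only thing that can be stored, and only if the user chooses to post it.
    
    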

    Satisfied? If you signed that petition, I’m guessing the answer is no.

    Image via YouTube

  • Hundreds of Thousands Petition Facebook to Abandon Creepy Passive Listening Feature

    Given that a large number of people–users and non-users alike–have a severe distrust of Facebook and its intentions, it’s not that surprising that hundreds of thousands of people have already signed a petition asking that the largest social network in the world please kindly refrain from listening to users’ conversations.

    Ok, that might be a little misleading.

    What people want is for Facebook to cancel their plans to release a new app feature that passively listens to users’ background activity in order to identify songs, TV shows, movies, and more to help with easier status sharing. Or, you know, possibly eavesdrop on your most intimate conversations, store that data, and sell it to the highest bidder.

    What we have here is a classic case of I don’t believe a goddamned word you say, as Facebook is pretty clear about what the new feature does, and maybe more importantly, what it doesn’t do.

    First, let’s look at what Facebook says it does:

    When writing a status update – if you choose to turn the feature on – you’ll have the option to use your phone’s microphone to identify what song is playing or what show or movie is on TV.

    That means if you want to share that you’re listening to your favorite Beyoncé track or watching the season premiere of Game of Thrones, you can do it quickly and easily, without typing.

    As for what it doesn’t do, Facebook says that not only is no sound stored, but it can’t even understand background noise like conversations–only movies, music, and TV shows. It’s kind of like Shazam, but with more sharing.

    It seems the creators of a fast-moving petition on the site Sum Of Us take issue with that last part.

    “Facebook says the feature will be used for harmless things, like identifying the song or TV show playing in the background, but it actually has the ability to listen to everything — including your private conversations — and store it indefinitely,” says the petition. “Not only is this move just downright creepy, it’s also a massive threat to our privacy. This isn’t the first time Facebook has been criticized for breaching our right to privacy, and it’s hoping this feature will fly under the radar. No such luck for Facebook. If we act now, we can stop Facebook in its tracks before it has a chance to release the feature.”

    The petition continues…

    “Facebook says it’ll be responsible with this feature, but we know we can’t trust it. After all, just a few months ago Facebook came under fire for receiving millions of dollars for working with the National Security Agency’s PRISM, a wide-scale and highly controversial public electronic data surveillance program — something its CEO Mark Zuckerberg initially denied…”

    Still denies, actually.

    The petition currently has about 235,000 signatures out of a necessary 250,000. At the rate it’s moving, it should hit its threshold by the end of the day, thanks to social media and a nice, warm reddit hug.

    The feature is set to land on both iOS and Android in the next few weeks.

    Image via YouTube

  • Facebook Privacy Default Now ‘Friends’ for New Users, the Rest of Us Get the Privacy Dinosaur

    Facebook has just announced a small change to new user privacy and a new initiative for old user privacy that, at its core, wants to make sure that people are sharing everything with the correct audience. You know, like only telling your friends that you’ll be out of town all weekend, not the general public.

    Until now, if you were 18 years old or older, the default privacy setting for your first post after joining Facebook was ‘Public’ (for teens it’s ‘Friends’ by default, but with a ‘Public’ option). Today, that changes. Facebook is doing the right thing and switching the default post setting for new users to just ‘Friends’.

    “On Facebook you can share whatever you want with whomever you want, from a one-to-one conversation, to friends or to everyone. While some people want to post to everyone, others have told us that they are more comfortable sharing with a smaller group, like just their friends. We recognize that it is much worse for someone to accidentally share with everyone when they actually meant to share just with friends, compared with the reverse. So, going forward, when new people join Facebook, the default audience of their first post will be set to Friends. Previously, for most people, it was set to Public,” says the company in a release.

    Facebook: “oversharing is worse than undersharing.” Quick, print that on a t-shirt!

    Of course, old Facebook users need a crash course in oversharing as well. To this end, Facebook will be launching an updated and expanded version of the privacy checkup tool, otherwise known as the Privacy Dinosaur, which appeared on the site in late March.

    The privacy dino jumps in and asks people to confirm their posting audience. “Sorry to interrupt. You haven’t changed who can see your posts lately, so we just wanted to make sure you’re sharing this post with the right audience. (Your current setting is _______, though you can change this whenever you post)” it says.

    Now, Facebook is introducing what looks like a quick privacy tutorial (seen above) that teaches all users about sharing settings. Honestly, who couldn’t use a refresher on privacy settings? Imagine all the relationships, careers, and families that could be saved by people knowing who’s hearing the crap they’re saying on Facebook.

    At least it’ll be nice to know who you’re sharing with when Facebook decides to eavesdrop on you.

  • Post Snowden, Tech Companies Are Much More Transparent and Protective of User Privacy

    In May of 2013, the Electronic Frontier Foundation published their third-ever “Who Has Your Back” report, which looks at major tech companies and how they stack up when it comes to protecting user data and privacy. Judged against the six criteria the EFF uses, only two companies received perfect six-star ratings. Many top companies, like Apple and Yahoo, only received one measly star out of six. It was clear that many of the companies people trust with their most personal information were dropping the ball when it came to protecting it from prying eyes, as well as letting users know when the government came a-pryin’.

    Then something big happened. About a month after that report hit the internet, a journalist named Glenn Greenwald published documents given to him by one Edward Snowden, a former contractor for the NSA. The documents detailed a massive surveillance initiative that saw the U.S. government collecting troves of data on American citizens (and some abroad), and even suggested that some of the same tech companies in the EFF’s report had been a party to the spying.

    These revelations, along with the many that came after, caused quite the stir and ignited a heated debate over privacy, data security, government overreach, and national safety interests. People became more aware of the potential for companies to play fast and loose with their personal data, and companies were forced to shift policies in order to regain users’ trust.

    Or at least that’s the picture that the EFF’s new Who Has Your Back report is painting.

    In the 2014 report, nine companies received perfect six-star ratings when it comes to protecting user privacy: Apple, Credo Mobile, Dropbox, Facebook, Google, Microsoft, Sonic.net, Twitter, and Yahoo. Last year, both Apple and Yahoo only received one star, Facebook had received three, and Google had five. The only two companies that had perfect ratings in 2013 both kept their perfect scores this year: Sonic.net and Twitter.

    So, what are the stars for? The EFF judges each company on six criteria: Does the company require a warrant for content? Does it tell users about government data requests? Does it publish a transparency report? Does it publish law enforcement guidelines? Does it fight for users’ rights in the courts? And does it fight for users’ privacy rights in Congress?
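    The scoring itself is just a tally – one star per criterion met, six for a perfect score. A trivial sketch, using figures from the report (Twitter met all six criteria in 2014; Snapchat only published law enforcement guidelines):

    ```python
    # The EFF's six criteria, paraphrased from the report.
    CRITERIA = [
        "requires a warrant for content",
        "tells users about government data requests",
        "publishes a transparency report",
        "publishes law enforcement guidelines",
        "fights for users' rights in the courts",
        "fights for users' privacy rights in Congress",
    ]

    def stars(company: dict) -> int:
        # One star per criterion met; six is a perfect score.
        return sum(1 for criterion in CRITERIA if company.get(criterion, False))

    twitter_2014 = {criterion: True for criterion in CRITERIA}
    snapchat_2014 = {"publishes law enforcement guidelines": True}

    print(stars(twitter_2014))   # 6
    print(stars(snapchat_2014))  # 1
    ```
    
    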

    For the more visually inclined, here’s a comparison of 2013 and 2014’s star charts. It’s clear to see that there is significantly more gold in 2014.

    2013

    2014

    For the first time in the history of the report, all companies are at least doing one thing to protect user privacy. The big blemishes on 2014’s list are major telecoms AT&T and Comcast (no surprises there), Amazon.com, and newcomer Snapchat, which the EFF urges to step it up.

    “Snapchat stands out in this report: added for the first time this year, it earns recognition in only one category, publishing law enforcement guidelines. This is particularly troubling because Snapchat collects extremely sensitive user data, including potentially compromising photographs of users. Given the large number of users and nonusers whose photos end up on Snapchat, Snapchat should publicly commit to requiring a warrant before turning over the content of its users’ communications to law enforcement. We urge them to change course,” they say.

    To answer the question of why the big change (for most major companies at least), the EFF gives credit to the Edward Snowden leaks, which they say prompted “significant policy reform” from major tech companies.

    “These changes in policy were likely a reaction to the releases of the last year, which repeatedly pointed to a close relationship between tech companies and the National Security Agency. Tech companies have had to work to regain the trust of users concerned that the US government was accessing data they stored in the cloud. This seems to be one of the legacies of the Snowden disclosures: the new transparency around mass surveillance has prompted significant policy reforms by major tech companies.”

    And it’s really been transparency that’s had the most focus in the post-Snowden era. Many companies saw the publishing of a data request transparency report as a way to say “look, we’re not trying to hide anything from you.” As the EFF notes, even major ISPs like AT&T, Comcast, and Verizon now publish transparency reports.

    You can check out the EFF’s incredibly detailed report of each company featured on the list here.

    Images via EFF

  • Google Glass Champion Has Realization at Skrillex Show

    Sudden clarity Robert Scoble had a realization at a Skrillex show recently. Robert Scoble was at a Skrillex show? Yes, it was Coachella. Oh, ok. Proceed.

    Google has a serious problem when it comes to Google Glass. Check out his musings:

    Let’s all remember that Robert Scoble was one of Google Glass’ early adopters and champions. He’ll wear them in a public restroom. Shits aren’t given.

    But now he’s not wearing them as much anymore. He says that most Google employees he knows aren’t even wearing them that much anymore.

    “It’s not shocking to me that most Google employees I know that have them aren’t wearing them around anymore and that many in the community are grumbling behind the scenes (most won’t write about their concerns because they need to have good relationships with Google),” he says in the comments of that same Facebook post.

    He definitely has a point in that Google Glass basically does the same thing a GoPro does. But why then does Glass feel creepy and intrusive? Why are Google Glass users “Glassholes” but people wearing GoPros on their heads at concerts aren’t “GoPricks?”

    Whatever the reason (I’m not sure if it’s a lack of “empathy”), Google has done something wrong and something has to change. Stuff like this isn’t helping.

    And if you’re raging at a Skrillex show, I can guarantee that you want no part of someone with a camera attached to their face.

    Image via Robert Scoble, Google+

  • FTC to Facebook: Don’t Screw with User Privacy After WhatsApp Acquisition

    Today, Facebook Chief Privacy Officer Erin Egan and WhatsApp General Counsel Anne Hoge received a diplomatic, but pointed letter from Jessica Rich, the Federal Trade Commission’s Bureau of Consumer Protection head.

    And the TL;DR version is this: don’t screw with user privacy when this acquisition goes through (and if you do, you have to let them know and make it opt-in).

    “WhatsApp has made a number of promises about the limited nature of the data it collects, maintains, and shares with third parties–promises that exceed the protections currently promised to Facebook users. We want to make clear that regardless of the acquisition, WhatsApp must continue to honor these promises to consumers. Further, if the acquisition is completed and WhatsApp fails to honor these promises, both companies could be in violation of Section 5 of the Federal Trade Commission (FTC) Act and, potentially, the FTC’s order against Facebook,” said Rich.

    What the FTC is referencing with that last bit is a 2011 settlement with Facebook, after the commission charged the social network with deceiving consumers by failing to keep privacy promises.

    The FTC listed more than a half-dozen individual instances where Facebook dropped the ball on privacy promises, and in the end ruled that Facebook be required to give users “clear and prominent” notice and get permission before data is shared beyond the privacy settings that have already been established.

    Facebook also agreed to biennial independent audits of its privacy practices for the next two decades. Mark Zuckerberg’s company dodged a fine.

    Of course, Facebook and WhatsApp have already promised users that nothing is going to change with the acquisition.

    “Here’s what will change for you, our users: nothing…And you can still count on absolutely no ads interrupting your communication. There would have been no partnership between our two companies if we had to compromise on the core principles that will always define our company, our vision and our product,” they said at the time of the announcement.

    Just last month, in an attempt to further assuage the WhatsApp user base, CEO Jan Koum promised that the service will still protect their privacy, even with Facebook as overlords.

    “Respect for your privacy is coded into our DNA, and we built WhatsApp around the goal of knowing as little about you as possible: You don’t have to give us your name and we don’t ask for your email address. We don’t know your birthday. We don’t know your home address. We don’t know where you work. We don’t know your likes, what you search for on the internet or collect your GPS location. None of that data has ever been collected and stored by WhatsApp, and we really have no plans to change that,” he said.

    Rich warns Facebook that any shifts in policy must be telegraphed, or else the company could be in violation of their 2011 settlement.

    “Before changing WhatsApp’s privacy practices in connection with, or following, any acquisition, you must take steps to ensure that you are not in violation of the law or the FTC’s order. First, if you choose to use data collected by WhatsApp in a manner that is materially inconsistent with the promises WhatsApp made at the time of collection, you must obtain consumers’ affirmative consent before doing so. Second, you must not misrepresent in any manner the extent to which you maintain, or plan to maintain, the privacy or security of WhatsApp user data,” she says in the letter.

    The FTC’s skepticism likely mirrors that of many a WhatsApp user.

    Image via Wikimedia Commons

  • Utah Revenge Porn Ban Signed into Law

    Utah has just become the latest state to pass a law specifically banning “distributing intimate images of a person without that person’s permission.” Such bills are generally drafted to tackle so-called revenge porn.

    Governor Gary R. Herbert signed the bill, HB71, into law on Monday. The bill passed both houses of the state legislature last month. It modifies existing Utah criminal code to make the distribution of intimate images a class A misdemeanor on the first offense, with subsequent offenses deemed third-degree felonies.

    Here’s the meat of the new law:

    An actor commits the offense of distribution of intimate images if the actor, with the intent to cause emotional distress or harm, knowingly or intentionally distributes to any third party any intimate image of an individual who is 18 years of age or older, if: the actor knows that the depicted individual has not given consent to the actor to distribute the intimate image; the intimate image was created by or provided to the actor under circumstances in which the individual has a reasonable expectation of privacy; and actual emotional distress or harm is caused to the person as a result of the distribution under this section

    The new law also has built-in exemptions for law enforcement and others acting lawfully in the reporting of a criminal offense. HB71 also says that the new regulations do not apply to ISPs and other “information services” as long as their only part in the distribution of the unlawful images comes from “transmitting or routing data from one person to another” or “providing a connection between one person and another person.”

    Of course, this is an important distinction. Utah’s new law clearly wants to go after the actual distributors of the images (the jilted ex-lovers, if you will) instead of ISPs and websites.

    As we’ve discussed before, criminalizing revenge porn doesn’t just affect the “jilted ex-lovers” who post it online, but also revenge porn websites and web hosting companies. The latter can typically shield themselves behind the Communications Decency Act, which protects websites from liability for user-submitted content. Utah’s new law at least attempts to make that distinction.

    Utah becomes just one of a few states to enact laws banning revenge porn. California recently passed their own laws against the practice and New York is currently debating similar statutes.

    And though anti-revenge porn laws have so far been a state undertaking, it looks like the federal government is about to get involved.

    Image via Thinkstock