WebProNews

Tag: Privacy

  • 500 Chrome Extensions Caught Uploading Private Data

    Independent researcher Jamila Kaya, in cooperation with Cisco-owned Duo Security, helped uncover approximately 500 Chrome extensions that were uploading private data from millions of users.

    Kaya used Duo Security’s CRXcavator—an automated tool designed specifically to help assess Chrome extensions—to “uncover a large scale campaign of copycat Chrome extensions that infected users and exfiltrated data through malvertising while attempting to evade fraud detection on the Google Chrome Web Store.” Initially, Kaya discovered 70 malicious extensions being used by 1.7 million users. Kaya and Duo Security notified Google, who went on to find an additional 430 similar extensions.

    “In the case reported here, the Chrome extension creators had specifically made extensions that obfuscated the underlying advertising functionality from users,” wrote Kaya and Duo Security’s Jacob Rickerd. “This was done in order to connect the browser clients to a command and control architecture, exfiltrate private browsing data without the users knowledge, expose the user to risk of exploit through advertising streams, and attempt to evade the Chrome Web Store’s fraud detection mechanisms.”

    Google quickly removed all 500 extensions and implemented new policies to make it harder for this type of extension to reappear. As Duo Security recommends, individuals should periodically review the extensions they’re using and delete any they don’t recognize or no longer use.

  • WhatsApp Now Has Two Billion Users

    Facebook-owned WhatsApp achieved a significant milestone, officially crossing the two billion user threshold.

    WhatsApp is the most popular messaging app on the planet and is a primary means of electronic communication in many countries. In addition to being cross-platform, the app supports audio and video calls, text and voice messages, file sharing and more. Significantly, the app supports end-to-end encryption, making it a vital element for many journalists and individuals who live under oppressive regimes.

    Not surprisingly, Facebook’s announcement regarding its user base focused heavily on the privacy aspects of the app. After acknowledging that the more people use the app, the more important it is to keep it secure, Facebook touted its commitment to continuing its strong stance on security and encryption.

    “That is why every private message sent using WhatsApp is secured with end-to-end encryption by default. Strong encryption acts like an unbreakable digital lock that keeps the information you send over WhatsApp secure, helping protect you from hackers and criminals. Messages are only kept on your phone, and no one in between can read your messages or listen to your calls, not even us. Your private conversations stay between you.

    “Strong encryption is a necessity in modern life. We will not compromise on security because that would make people less safe. For even more protection, we work with top security experts, employ industry leading technology to stop misuse as well as provide controls and ways to report issues — without sacrificing privacy.”

    As the war on privacy continues, it’s reassuring that one of the most widely used services remains more committed than ever to supporting strong encryption in an effort to protect its users.

  • Bad News For Uber: L.A. Wins Data-Sharing Appeal

    Uber and Los Angeles have been fighting over a rule ordering scooter rental companies to share ride data with the city—a rule that was just upheld on appeal, according to the Los Angeles Times.

    The Los Angeles Department of Transportation (LADOT) passed a rule requiring scooter and electric bike sharing services to provide real-time data on riders’ trips, including start and end points, as well as the full route traveled.

    Uber has argued that providing that degree of data would unnecessarily risk riders’ privacy, make it all too easy to personally identify individual riders, and “reveal personal information about riders, including where they live, work, socialize or worship,” according to the LA Times. After six months of arguing, the city suspended Uber’s operating license.

    Uber filed an appeal, which was heard “by David B. Shapiro, a lawyer who has handled appeals for multiple city departments, including the Los Angeles Fire Department and the Department of Cannabis Regulation.”

    Although Shapiro sided with the city in saying it was within its rights to terminate Uber’s operating permit, he said both sides had made weak arguments. Shapiro acknowledged Uber’s privacy concerns, but noted that Uber failed to provide examples of the data being used improperly. At the same time, LADOT failed to make a compelling case as to how it could use real-time data to solve the problems it cites as the reason for the rule. Uber has already said it is willing to provide near-real-time, aggregated data that would protect privacy.

    Shapiro’s decision is a loss for privacy advocates and concerned citizens, but Uber has already promised to appeal.

  • Avast Caught Selling Detailed Browsing History to Marketers

    Another day, another company abusing customer privacy. A joint investigation by PCMag and Motherboard has discovered that antivirus maker Avast, who also owns AVG, has been selling extremely detailed information about customer browsing histories to marketers.

    The company division responsible is Jumpshot, and it has “been offering access to user traffic from 100 million devices.” In a tweet the company sent last month to attract new clients, it promised to deliver “‘Every search. Every click. Every buy. On every site’ [emphasis Jumpshot’s],” according to Motherboard.

    In fact, the level of detail the data provides is astounding, allowing clients to “view the individual clicks users are making on their browsing sessions, including the time down to the millisecond. And while the collected data is never linked to a person’s name, email or IP address, each user history is nevertheless assigned to an identifier called the device ID, which will persist unless the user uninstalls the Avast antivirus product.”

    The data is anonymized so that, in theory, it can’t be tied to an individual user. However, the device ID is where the trouble comes in. For example, all a retailer would need to do is compare the time stamp that correlates to a specific purchase against their records to identify the customer. It would then be a simple matter to use that device ID to build a complete—and completely identifiable—profile of that person. With their entire browsing history, the retailer would know everything about what sites they visit, their habits, what their interests are and who their friends are.
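    The timestamp-matching attack described above is simple enough to sketch in a few lines. The following is a minimal, hypothetical illustration (all names, device IDs and record formats here are invented for the example; Jumpshot’s actual feed format is not public):

```python
# Hypothetical sketch of the re-identification risk described above.
from datetime import datetime, timedelta

# "Anonymized" clickstream: (device_id, timestamp, url) -- no names attached.
clickstream = [
    ("device-7f3a", datetime(2020, 1, 15, 14, 32, 4), "https://shop.example/checkout"),
    ("device-9c1b", datetime(2020, 1, 15, 14, 35, 51), "https://shop.example/checkout"),
]

# The retailer's own order log, which it naturally keeps under real names.
orders = [
    ("Alice Smith", datetime(2020, 1, 15, 14, 32, 5)),
]

def deanonymize(clicks, orders, tolerance=timedelta(seconds=2)):
    """Match checkout clicks to named orders by timestamp proximity."""
    matches = {}
    for device_id, ts, url in clicks:
        if "checkout" not in url:
            continue
        for name, order_ts in orders:
            if abs(ts - order_ts) <= tolerance:
                # The persistent device ID is now tied to a named person.
                matches[device_id] = name
    return matches

print(deanonymize(clickstream, orders))  # {'device-7f3a': 'Alice Smith'}
```

    In practice the retailer only needs one confident match per device ID; from then on, every click tied to that ID in the feed belongs to a known, named customer.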

    According to PCMag, Jumpshot even offered different products tailored to delivering different subsets of information. For example, one product focused on search results, both the terms searched for and the results visited. Another product focused on tracking what videos people are watching on Facebook, Instagram and YouTube.

    The granularity is particularly disturbing in relation to a contract Jumpshot had with marketing provider Omnicom Media Group, to provide them the “All Clicks Feed.” The service provides “the URL string to each site visited, the referring URL, the timestamps down to the millisecond, along with the suspected age and gender of the user, which can be inferred based on what sites the person is visiting.” While the device ID was stripped from the data for most companies that signed up for the All Clicks Feed, Omnicom Media Group was the exception, receiving the data with device IDs intact.

    Much of the collection occurred through the antivirus software’s browser extensions, and Avast has since stopped sharing the data it collects through those extensions. However, the company has not committed to delete the data it has already collected. The company can also still collect browsing history through its Avast and AVG antivirus software, on both desktop and mobile.

    That ambiguity has not gone over well with Senator Ron Wyden, a staunch privacy advocate. According to both PCMag and Motherboard, Wyden said in a statement that “It is encouraging that Avast has ended some of its most troubling practices after engaging constructively with my office. However I’m concerned that Avast has not yet committed to deleting user data that was collected and shared without the opt-in consent of its users, or to end the sale of sensitive internet browsing data. The only responsible course of action is to be fully transparent with customers going forward, and to purge data that was collected under suspect conditions in the past.”

    The full read at either PCMag or Motherboard is fascinating and is another good reminder that nothing in life is free. Companies that offer a ‘free service’ are making their money somewhere—often at the expense of the customer.

  • Google Accidentally Sent Video Backups to Strangers

    Google has admitted it accidentally sent videos from its Google Takeout archive service to the wrong users, according to 9to5Google.

    Google Takeout is a service that lets users download all their data to migrate to another service, or merely to use as a backup. According to the report, however, Google accidentally downloaded and saved some users’ videos to the wrong archives, essentially sending them to complete strangers.

    Google provided 9to5Google with the following statement:

    “We are notifying people about a bug that may have affected users who used Google Takeout to export their Google Photos content between November 21 and November 25. These users may have received either an incomplete archive, or videos—not photos—that were not theirs. We fixed the underlying issue and have conducted an in-depth analysis to help prevent this from ever happening again. We are very sorry this happened.”

    The company asks users who were impacted to delete their previous export and request another one.

  • Ring Uses Android Doorbell App to Surveil Customers

    The Electronic Frontier Foundation (EFF) has discovered that Ring’s Android doorbell camera app is being used to surveil customers.

    According to the EFF, the Ring Android app is “packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII). Four main analytics and marketing companies were discovered to be receiving information such as the names, private IP addresses, mobile network carriers, persistent identifiers, and sensor data on the devices of paying customers.”

    Specifically, the data is shared with Branch, AppsFlyer, MixPanel and Google’s Crashlytics. EFF’s investigation was able to uncover what data was being sent to each entity.

    Branch is a “deep linking” platform that receives several unique identifiers, “as well as your device’s local IP address, model, screen resolution, and DPI.”

    AppsFlyer is “a big data company focused on the mobile platform,” and receives information that includes unique identifiers, when Ring was installed, interactions with the “Neighbors” section and more. Even worse, AppsFlyer “receives the sensors installed on your device (on our test device, this included the magnetometer, gyroscope, and accelerometer) and current calibration settings.”

    MixPanel receives the most information, including “users’ full names, email addresses, device information such as OS version and model, whether bluetooth is enabled, and app settings such as the number of locations a user has Ring devices installed in.”

    It’s unknown what data is sent to Crashlytics, although it’s likely the most benign of the data-sharing partnerships.

    The worst part is that, while all of these companies are listed in Ring’s third-party services list, the amount of data collection is not. As a result, there is no way for a customer to know how much data is being collected or what is being done with it, let alone have the option to opt out of it.

    Ring has been in the news recently for several high-profile security issues, including its cameras being hacked and a VICE investigation revealing an abysmal lack of basic security features. While both of these can be chalked up to errors or incompetence, this latest discovery is deeply disturbing because it speaks to how Ring is designed to function—namely as a way for the company to profit off of surveilling its own customers.

  • NJ Bans Clearview; Company Faces Potential Class-Action

    Facial recognition firm Clearview AI is facing a potential class-action lawsuit, while simultaneously being banned from being used by NJ police, according to separate reports by the New York Times (NYT) and CNET.

    The NYT is reporting that Clearview has found itself in hot water with the New Jersey attorney general over the main promotional video it was running on its website. The video showed Attorney General Gurbir Grewal and two state troopers at a press conference detailing an operation to apprehend 19 men accused of trying to lure children for sex, an operation that Clearview took at least partial responsibility for.

    Mr. Grewal was not impressed with Clearview using his likeness in its promotional material, nor with the potential legal and ethical issues the service poses.

    “Until this week, I had not heard of Clearview AI,” Mr. Grewal said in an interview. “I was troubled. The reporting raised questions about data privacy, about cybersecurity, about law enforcement security, about the integrity of our investigations.”

    Mr. Grewal was also concerned about the company sharing details of ongoing investigations.

    “I was surprised they used my image and the office to promote the product online,” Mr. Grewal continued, while also acknowledging that Clearview had been used to identify one of the suspects. “I was troubled they were sharing information about ongoing criminal prosecutions.”

    As a result of his concerns, Mr. Grewal has told state prosecutors in NJ’s 21 counties that police should not use the app.

    At the same time, CNET is reporting an individual has filed a lawsuit in the US District Court for the Northern District of Illinois, Eastern Division, and is seeking class-action status.

    “Without obtaining any consent and without notice, Defendant Clearview used the internet to covertly gather information on millions of American citizens, collecting approximately three billion pictures of them, without any reason to suspect any of them of having done anything wrong, ever,” alleges the complaint. “Clearview used artificial intelligence algorithms to scan the facial geometry of each individual depicted in the images, a technique that violates multiple privacy laws.”

    It was only a matter of time before Clearview faced the fallout from its actions. It appears that fallout is happening sooner rather than later.

  • Troubles Mount For Clearview AI, Facial Recognition Firm

    According to a report by The Verge, Clearview AI is facing challenges to both its credibility and the legality of the service it provides.

    On the heels of reports, originally covered by the New York Times, that Clearview AI has amassed more than three billion photos, scraped from social media platforms and millions of websites—and has incurred Twitter’s ire in the process—it appears the company has not been honest about its background, capabilities or the extent of its successes.

    A BuzzFeed report points out that Clearview AI’s predecessor program, Smartcheckr, was specifically marketed as being able to “provide voter ad microtargeting and ‘extreme opposition research’ to Paul Nehlen, a white nationalist who was running on an extremist platform to fill the Wisconsin congressional seat of the departing speaker of the House, Paul Ryan.”

    Further hurting the company’s credibility is an example it uses in its marketing, about an alleged terrorist who was apprehended in New York City after causing panic by disguising rice cookers as bombs. The company cites the case as one of thousands of instances in which it has aided law enforcement. The only problem is that the NYPD said it did not use Clearview in that case.

    “The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a spokesperson for the NYPD told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.”

    That last statement, regarding “lawfully possessed arrest photos,” is particularly stinging as the company is beginning to face legal pushback over its activities.

    New York Times journalist Kashmir Hill, who originally broke the story, cited the example of asking police officers she was interviewing to run her face through Clearview’s database. “And that’s when things got kooky,” Hill writes. “The officers said there were no results — which seemed strange because I have a lot of photos online — and later told me that the company called them after they ran my photo to tell them they shouldn’t speak to the media. The company wasn’t talking to me, but it was tracking who I was talking to.”

    Needless to say, such an Orwellian use of the technology is not sitting well with some lawmakers. According to The Verge, members of Congress are beginning to voice concerns, with Senator Ed Markey sending a letter to Clearview founder Hoan Ton-That demanding answers.

    “The ways in which this technology could be weaponized are vast and disturbing. Using Clearview’s technology, a criminal could easily find out where someone walking down the street lives or works. A foreign adversary could quickly gather information about targeted individuals for blackmail purposes,” writes Markey. “Clearview’s product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified.”

    The Verge also cites a recent Twitter post by Senator Ron Wyden, one of the staunchest supporters of individual privacy, in which he comments on the above disturbing instance of Clearview monitoring Ms. Hill’s interactions with police officers.

    “It’s extremely troubling that this company may have monitored usage specifically to tamp down questions from journalists about the legality of their app. Everyday we witness a growing need for strong federal laws to protect Americans’ privacy.”

    —Ron Wyden (@RonWyden) January 19, 2020

    Ultimately, Clearview may well provide the impetus for lawmakers to craft a comprehensive, national-level privacy law, something even tech CEOs are calling for.

  • The Company That Can End Privacy Just Ran Afoul of Twitter

    Clearview AI, the company that made headlines last week for potentially ending privacy as we know it, has incurred the wrath of Twitter, according to The Seattle Times.

    New York Times journalist Kashmir Hill first reported on Clearview AI, a small, little-known startup that allows you to upload a photo and then compare it against a database of more than three billion photos the company has amassed. Clearview’s system will then show you “public photos of that person, along with links to where those photos appeared.”

    Clearview has built its database by scraping Twitter, Facebook, YouTube, Venmo and millions of other websites for photos of people, something that is blatantly against most companies’ terms of service. The database is so far beyond anything the government has that some 600 law enforcement agencies have begun using Clearview—without any public scrutiny or a legislative stance on the legality of what Clearview does.

    To make matters even worse, once a person’s photos or social media profile has been scraped and added to the database, there is currently no way to have the company remove them. The only recourse available to individuals is to change the privacy settings of their social media profiles to prevent search engines from accessing them. This will stop Clearview from scraping any additional photos from their profile but, again, it does nothing to address any photos the company may already have.

    At least one company is taking a strong stand against Clearview, namely Twitter. The Seattle Times is reporting that Twitter has sent Clearview a cease-and-desist demanding it stop scraping their site and user profiles for “any reason.” The cease-and-desist further demands that Clearview delete any and all data it has already collected from Twitter.

    Clearview is a prime example of what Alphabet CEO Sundar Pichai was talking about, in an op-ed he published in the Financial Times, when he said tech companies needed to take responsibility for the technology they create, not just charge ahead because they can. Similarly, Salesforce co-CEO Keith Block recently said the U.S. needed a national privacy law similar to the EU’s GDPR. If Clearview doesn’t make a case for such regulation…nothing will.

    In the meantime, here’s to hoping every other company and website Clearview has scraped for photos takes as strong a stance as Twitter.

  • Salesforce Co-CEO Says U.S. Needs National Privacy Law

    Salesforce co-CEO Keith Block has come out in favor of a national privacy law, according to CNBC.

    Privacy is becoming one of the biggest battlegrounds for companies, governments and individuals alike. The U.S., however, does not have a comprehensive privacy law to outline what companies can and cannot do with individual data, or what rights individuals have to protect their privacy.

    In contrast, the European Union’s (EU) General Data Protection Regulation (GDPR) went into effect in 2018 and provides comprehensive privacy protections and gives consumers rights over their own data. Similarly, the California Consumer Privacy Act (CCPA) went into effect January 1, and provides similar protections. Although companies, such as Microsoft and Mozilla, have expanded GDPR and CCPA protections to all of their customers, there are far more companies that have not, and have no intention of doing so.

    At a panel discussion at the World Economic Forum (WEF), Keith Block said the U.S. needs its own version of the GDPR.

    “You have to applaud, for example, the European Union for coming up with GDPR and hopefully there will be a GDPR 2.0,” said Block.

    “There is no question there needs to be some sort of regulation in the United States. It would be terrific if we had a national data privacy law; instead we have privacy by zipcode, which is not a good outcome,” he said.

    As the issue continues to impact individuals and organizations, it will be interesting to see if the U.S. follows the EU’s lead.

  • Judge Orders Facebook To Hand Over Data About Possible Privacy Issues

    According to The Wall Street Journal, “a Massachusetts judge has ordered Facebook to turn over data about thousands of apps that may have mishandled its users’ personal information.”

    In the wake of the Cambridge Analytica scandal, Facebook has faced ongoing scrutiny and lawsuits related to how it handles user data. The U.S. Federal Trade Commission fined the social media giant $5 billion for its role in Cambridge Analytica. More recently, Brazil levied a $1.6 million fine on the company for the same thing.

    The most recent decision stems from Facebook’s own admission last year that it had suspended “tens of thousands of apps for possible privacy violations.” Unfortunately, that was all Facebook was willing to admit, providing neither the specific apps that were suspended, nor the alleged violations they were guilty of. As a company that has long since lost the trust of many customers and lawmakers, Facebook’s protestations that it shouldn’t be forced to turn over the data fell on deaf ears. Now the Suffolk Superior Court judge has given the company 90 days to turn over the data.

    “We are pleased that the Court ordered Facebook to tell our office which other app developers may have engaged in conduct like Cambridge Analytica,” Massachusetts Attorney General Maura Healey said in a statement.

    Facebook says it is reviewing its options and may appeal the ruling.

  • EU Considering A Five-Year Ban On Facial Recognition In Public

    Politico is reporting that the European Union (EU) is considering banning facial recognition in public areas for up to five years.

    Facial recognition is quickly becoming the latest battleground in the fight over user privacy. Some countries, such as China, have embraced the technology and taken surveillance of citizens to an all-new level. The U.S. has waffled back and forth, rolling out facial recognition in sensitive areas—such as airports—but often making participation optional. However, the Department of Homeland Security recently made headlines with a proposal that would expand facial recognition checks at airports, making them mandatory for citizens and foreigners alike.

    The EU, however, may be preparing to take the strongest stand against facial recognition and toward protecting individual privacy. According to a draft document Politico obtained, the EU is looking to expand its already rigorous privacy laws, stating that a “future regulatory framework could go further and include a time-limited ban on the use of facial recognition technology in public spaces.”

    The ban would cover facial recognition use by both public and private entities.

    “This would mean that the use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period (e.g. 3-5 years) during which a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed,” adds the document.

    As the debate about facial recognition continues, it will be interesting to see where the U.S. lands: whether it will emphasize protecting individual privacy like the EU, or emphasize surveillance like China.

  • Facebook Backtracks On Ads In WhatsApp

    More than a year after WhatsApp’s founders resigned in protest, Facebook is backtracking on its plans to include ads in the messaging app, according to The Wall Street Journal.

    WhatsApp’s founders, Jan Koum and Brian Acton, were so strongly opposed to ads being implemented in the app that “the two changed WhatsApp’s terms of service to explicitly forbid displaying ads in the app, and complicating any future efforts to do so,” people familiar with the matter told the WSJ.

    When Facebook acquired WhatsApp, Mark Zuckerberg said he agreed that ads were not a good fit for messaging platforms. Eventually, however, Facebook started looking for ways to recoup the $22 billion price tag and put ads on the table. Koum and Acton’s response was likely an effort to stave off Facebook’s changing views.

    As Facebook became more determined to implement ads, the two founders decided to part ways with the company, leaving “a combined $1.3 billion in deferred compensation” on the table.

    Now, it appears that Facebook has again had a change of heart. According to the WSJ, the team responsible for figuring out how to best integrate ads into WhatsApp has been disbanded, and “the team’s work was then deleted from WhatsApp’s code.”

    Instead, WhatsApp is focusing on commercial interactions, since the messaging service is increasingly being used by companies to provide customer service. This opens up entirely new ways for Facebook to monetize the platform without undermining the privacy and security that made it what it is today.

  • Advertisers Balk At Google’s Plan To Kill Third-Party Cookies

    In what is a surprise to no one, advertisers are begging Google not to kill third-party cookies in Chrome, according to CNBC.

    Google announced earlier this week its plans to phase out third-party cookies within two years. The company is trying to improve user privacy, while at the same time addressing the needs of advertisers, something it does not believe other browser makers do. While Apple’s Safari and Mozilla’s Firefox both include the ability to block third-party cookies, Google believes those solutions leave advertisers in the cold and encourage them to use more drastic and invasive methods to track users and make money.

    In its post announcing the plans, Google was light on details, promising to continue working with the web and advertising community to deliver a solution beneficial to all parties. That doesn’t seem to be enough for advertisers, however, as Dan Jaffe, EVP of government relations at the Association of National Advertisers, and Dick O’Brien, EVP of government relations at the American Association of Advertising Agencies, issued a statement protesting Google’s decision.

    According to CNBC, the statement said Google’s plans “may choke off the economic oxygen from advertising that startups and emerging companies need to survive.”

    The advertising groups acknowledged Google’s efforts to implement an alternative to the current cookie-based methods, but urged caution so as not to disrupt the web’s ecosystem with a half-baked solution.

    “In the interim, we strongly urge Google to publicly and quickly commit to not imposing this moratorium on third party cookies until effective and meaningful alternatives are available,” the statement said.

    As CNBC highlights, these same groups have expressed opposition to California’s CCPA privacy law, so it should be no surprise they aren’t happy with anything that impedes their ability to advertise—not even in the name of protecting user privacy.

  • Google Restricting Cookies In Chrome To Improve Privacy

    The days of cookies may be coming to an end as Google announces its plans to phase out third-party cookies within two years.

    The first indications of Google’s plans came in August when the company announced a new initiative called Privacy Sandbox. The initiative was founded in an effort to keep publishers from abusing technologies to track users. Specifically, many web publishers have found ways to work around blanket efforts to block third-party cookies with even more invasive types of tracking, such as fingerprinting. As Google describes:

    “With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected.”
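    The mechanism Google describes can be sketched in a few lines. This is a hypothetical illustration only — the attribute set below is invented, and real fingerprinting scripts harvest far more signals (canvas rendering, audio-stack quirks, plugin lists and so on):

```python
# Illustrative sketch of browser fingerprinting as described above.
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine small per-device traits into one stable identifier."""
    # Sort keys so the same device always yields the same hash.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical traits a script could read from the browser environment.
device = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "2560x1440",
    "timezone": "America/Los_Angeles",
    "fonts": "Arial,DejaVu Sans,Noto Serif",
}

# Unlike a cookie, this value is recomputed from the device's own
# characteristics on every visit, on every site that runs the script.
print(fingerprint(device))
```

    Because the identifier is derived from the device itself rather than stored on it, clearing cookies or local storage does nothing to reset it — which is exactly why Google singles the technique out as uncontrollable by users.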

    With today’s announcement, Google is looking for a more nuanced approach, one that addresses the needs of advertisers to make money in a way that does not abuse privacy. The company has been receiving feedback from W3C forums and other standards participants, feedback that indicates it is on the right track. Bolstered by this feedback, Google has committed to a timeline for its plans.

    “Once these approaches have addressed the needs of users, publishers, and advertisers, and we have developed the tools to mitigate workarounds, we plan to phase out support for third-party cookies in Chrome. Our intention is to do this within two years.”

    Google also plans to address other privacy issues, such as cross-site tracking and fingerprinting. The company has been under increasing scrutiny for Chrome’s privacy, or lack thereof. In June 2019, The Washington Post went so far as to label the browser “spy software,” attributing the problem to Google’s position as both a browser maker and the single biggest cookie generator on the web. Relying on the search giant to protect user privacy is akin to relying on the fox to guard the henhouse.

    Hopefully Privacy Sandbox and Google’s commitment to phase out third-party cookies are a step in the right direction.

  • Verizon Launches OneSearch, A Privacy-Focused Search Engine

    Verizon Launches OneSearch, A Privacy-Focused Search Engine

    Verizon has announced the launch of OneSearch, a brand-new search engine focused on privacy, according to a press release.

    Privacy is increasingly becoming a major factor for tech companies, governments and users alike. The European Union’s General Data Protection Regulation (GDPR) went into effect in 2018, and as of January 1, 2020, California has implemented the California Consumer Privacy Act (CCPA), the most comprehensive privacy law in the U.S. The increased regulation, not to mention growing consumer demand, has created both challenges and opportunities for tech companies.

    Verizon’s solution seems to be a search engine, powered by Bing, that caters toward privacy-conscious users. According to Verizon’s press release, “available for free today on desktop and mobile web at www.onesearch.com, OneSearch doesn’t track, store, or share personal or search data with advertisers, giving users greater control of their personal information in a search context. Businesses with an interest in security can partner with Verizon Media to integrate OneSearch into their privacy and security products, giving their customers another measure of control.”

    The search engine has additional advanced features, such as temporary link sharing. When Advanced Privacy Mode is enabled, any links to search results will expire in one hour.
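    Verizon hasn’t published how the expiring links work under the hood. A common way to build self-expiring links of this kind is an HMAC-signed URL that carries its own expiry timestamp, sketched below with hypothetical names; the server can then refuse any link that is expired or tampered with, without storing anything per link.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-signing-key"  # hypothetical key, never sent to clients

def make_link(query: str, ttl_seconds: int = 3600) -> str:
    """Build a share link the server will refuse after ttl_seconds."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{query}|{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"https://example.com/share?q={query}&exp={expires}&sig={sig}"

def verify_link(query: str, expires: int, sig: str) -> bool:
    """Reject links whose signature is wrong or whose hour is up."""
    payload = f"{query}|{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and time.time() < expires
```

    Because the expiry is inside the signed payload, a recipient cannot extend the link’s lifetime by editing the timestamp in the URL.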

    Users will still see ads when searching, but they will not be customized or based on the person’s search or browsing habits.

    “To allow for a free search engine experience, OneSearch is an ad-supported platform. Ads will be contextual, based on factors like search keywords, not cookies or browsing history. For example, if someone searches for ‘flights to Paris,’ they may see ads for travel booking sites or airlines that travel to Paris.”

    OneSearch does use some personal information. For example, a person’s IP address does provide general location information that can be used to provide location-specific results. Personal data is obfuscated and is never shared with search partners.
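    The press release doesn’t say how the obfuscation is done. One common approach is to truncate the IP address before any lookup, zeroing the host bits so the address maps to a network rather than a single subscriber, roughly as in this sketch:

```python
import ipaddress

def coarsen_ip(ip: str, prefix: int = 24) -> str:
    """Zero the host bits so the address identifies a network, not a person.

    203.0.113.77 becomes 203.0.113.0: still enough for city-level
    search results, not enough to single out one subscriber.
    """
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)

print(coarsen_ip("203.0.113.77"))  # 203.0.113.0
```

    A geolocation lookup on the coarsened address still returns a usable region, which is all that location-specific results need.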

    While it is always nice to see tech giants embrace privacy, it’s hard to see the benefits of OneSearch over DuckDuckGo. DuckDuckGo has a long-standing track record of providing private search. As CNET points out, the move is also interesting coming from Verizon Media, the branch of the telecommunications company “that runs an extensive ad network with more than 70,000 web publishers and apps as customers. While the search engine aims to attract users by turning on privacy features by default, OneSearch will also let Verizon Media hone its ad-matching powers on a search engine it owns. (Verizon also owns the Yahoo search engine.)”

    It will be interesting to see what becomes of OneSearch and if it lives up to its promise of respecting people’s privacy. In the meantime, most users will probably be better off using DuckDuckGo.

  • Amazon’s Ring Fires Employees For Improperly Accessing User Videos

    Amazon’s Ring Fires Employees For Improperly Accessing User Videos

    In the wake of reports of Ring devices being hacked, Amazon has informed senators of four incidents where employees improperly accessed user videos, according to Ars Technica.

    Amazon was replying to several senators who had sent inquiries to the company regarding its Ring business. Originally, the inquiries centered on Amazon’s relationship with hundreds of law enforcement agencies to promote Ring’s cameras. As news of Ring’s security woes became widely known, a group of senators sent a follow-up inquiry regarding those breaches.

    In its response, Amazon admitted that four employees in the last four years had improperly accessed user videos. In each case, according to the company, the employees did have legitimate access to user videos; however, “the attempted access to that data exceeded what was necessary for their job functions.”

    Amazon says swift action was taken to fire the employees involved and “take appropriate disciplinary action in each of these cases.” In addition, “Ring periodically reviews the access privileges it grants to its team members to verify that they have a continuing need for access to customer information for the purpose of maintaining and improving the customer experience.”

    Even with these steps, this is unwelcome attention for a company trying to prove its products can be trusted.

  • CES 2020: Ring Adds Privacy Control Center In Wake Of Hacks

    CES 2020: Ring Adds Privacy Control Center In Wake Of Hacks

    In the wake of multiple hacks and a subsequent lawsuit, Ring is off to a promising start at CES 2020, unveiling a new privacy Control Center, according to CNN.

    Ring has had a tough few weeks as multiple incidents surfaced of strangers accessing customers’ camera feeds. In one incident, a strange man talked to an 8-year-old girl via the camera in her room, while in another a man subjected a couple to racist comments about their son.

    While Ring said these incidents were not the result of a breach of its systems, and were instead caused by people reusing passwords that may have been hacked or exposed elsewhere, VICE tested Ring’s security and found it abysmal. In particular, Ring offered no way of knowing who else might be accessing a camera feed, or whether anyone else had ever accessed it.

    The announcement of the Control Center should go a long way toward addressing these concerns. The new tab provides a way to see who is accessing feeds, as well as whether a camera is being shared in the Neighbors app. The new feature will give users the ability to adjust the privacy settings for all of their Ring devices from a central location.

    The company plans to continue giving users more control and simplifying the interface as the Control Center evolves.

  • Xiaomi Says Camera Issue Has Been Identified And Partially Fixed

    Xiaomi Says Camera Issue Has Been Identified And Partially Fixed

    Following reports that Xiaomi cameras integrated with Google’s Nest Hub showed images and feeds from strangers’ cameras, The Verge is reporting that Xiaomi has identified the problem.

    Dio, a user in the Netherlands, first reported the issue when he used his Google Nest Hub to access his Xiaomi camera feed. Instead, he saw a stranger’s kitchen. Repeated attempts showed a random collection of camera feeds, only occasionally displaying his own. In response, Google shut down integration between the two services, until a fix could be found.

    In a statement to The Verge, Xiaomi identified the problem as the result of a “cache update” that was rolled out on December 26. The update was supposed to improve streaming quality, but ultimately led to the glitch. Xiaomi said the glitch only occurs in “extremely rare conditions.”

    In investigating what led to the issue Dio experienced, Xiaomi told The Verge: “It happened during the integration between Mi Home Security Camera Basic 1080p and the Google Home Hub with a display screen under poor network conditions.”

    While Xiaomi says it has partially fixed the issue, Nest integration will remain suspended until the root cause can be fully identified and addressed.

    While it’s reassuring this appears to be an isolated case, it illustrates the security issues that can occur when multiple devices and services are linked together. The more complex the integration, the greater the risk of security issues creeping in.

  • ToTok Co-Creator Denies App Is A Tool For UAE Spying

    ToTok Co-Creator Denies App Is A Tool For UAE Spying

    ToTok was recently removed from both Apple and Google’s app stores over allegations it was being used by the United Arab Emirates government to spy on users. In an interview with the Associated Press, co-creator Giacomo Ziani defended the app and denied it was a tool for spying.

    ToTok was released only months ago and quickly rose to become one of the most popular social apps. Helping drive its popularity was the fact that it was the only internet-calling app allowed in the UAE. Competing apps, such as FaceTime, WhatsApp and Skype, are not allowed.

    In a report by the New York Times—that was based on information from American officials who had access to classified intelligence—the app was accused of being a spying tool for the UAE to “track every conversation, movement, relationship, appointment, sound and image of those who install it on their phones.”

    Ziani, however, defended his creation and denied the allegations.

    “I was not aware, and I’m even not aware now of who was who, who was doing what in the past,” Ziani told the AP.

    Ziani attributed the allegations to professional jealousy, although he failed to provide any evidence to support his claim. It will be interesting to watch what happens with ToTok and whether Ziani is successful in getting the app reinstated on Apple and Google’s app stores.

    In the meantime, ToTok is a cautionary tale that illustrates the lengths some governments and organizations will go in order to spy on individuals.

  • Mozilla Bringing California Privacy Protections To All Firefox Users

    Mozilla Bringing California Privacy Protections To All Firefox Users

    The California Consumer Privacy Act (CCPA) went into effect on January 1, but Mozilla has vowed to apply its protections to all Firefox users in 2020.

    CCPA is a law California passed to protect user privacy and give people more control over how corporations can use their data. CCPA requires companies to be transparent about what data they collect and how they use it, as well as give users the ability to stop companies from selling their data.

    Microsoft was one of the first companies to publicly commit to applying CCPA protection to all of its U.S. customers. Mozilla is taking it a step further, applying CCPA rights to all Firefox users around the world. This is not the first time Mozilla has taken this stand. When the EU passed its GDPR privacy legislation, Mozilla similarly extended those protections to all users.

    Mozilla is also committing to extending these rules to so-called “telemetry data,” the anonymous technical information about browser usage that helps Mozilla improve security and performance.

    “One of CCPA’s key new provisions is its expanded definition of ‘personal data’ under CCPA. This expanded definition allows for users to request companies delete their user specific data.

    “As a rule, Firefox already collects very little of your data. In fact, most of what we receive is to help us improve the performance and security of Firefox. We call this telemetry data. This telemetry doesn’t tell us about the websites you visit or searches you do; we just know general information, like a Firefox user had a certain amount of tabs opened and how long their session was. We don’t collect telemetry in private browsing mode and we’ve always given people easy options to disable telemetry in Firefox. And because we’ve long believed that data should not be stored forever, we have strict limits on how long we keep telemetry data.

    “We’ve decided to go the extra mile and expand user deletion rights to include deleting this telemetry data stored in our systems. To date, the industry has not typically considered telemetry data ‘personal data’ because it isn’t identifiable to a specific person, but we feel strongly that taking this step is the right one for people and the ecosystem.”

    This is good news for all Firefox users and will likely help it continue to gain market share amongst privacy-minded individuals. Hopefully more companies will follow Mozilla and Microsoft’s example.