WebProNews

Tag: Privacy

  • What Not to Do: TikTok Censors ‘Ugly,’ ‘Poor’ and ‘Disabled’

    It may be one of the hottest social media platforms, but TikTok is providing a template of what not to do.

    Reporting for The Intercept, Sam Biddle, Paulo Victor Ribeiro and Tatiana Dias say that the company behind TikTok has “instructed moderators to suppress posts created by users deemed too ugly, poor, or disabled for the platform, according to internal documents obtained by The Intercept.”

    TikTok has faced ongoing scrutiny over privacy and security concerns. The Pentagon released guidance instructing military personnel to delete the app, and the company faces a lawsuit in California over allegations it uploaded videos to China without user consent. The app has also been dogged by censorship concerns, and the company has even announced a Transparency Center where critics can analyze how it moderates posts.

    According to The Intercept, “moderators were also told to censor political speech in TikTok livestreams, punishing those who harmed ‘national honor’ or broadcast streams about ‘state organs such as police’ with bans from the platform.” The policy also called for TikTok moderators “to suppress uploads from users with flaws both congenital and inevitable. ‘Abnormal body shape,’ ‘ugly facial looks,’ dwarfism, and ‘obvious beer belly,’ ‘too many wrinkles,’ ‘eye disorders,’ and many other ‘low quality’ traits are all enough to keep uploads out of the algorithmic fire hose. Videos in which ‘the shooting environment is shabby and dilapidated,’ including but ‘not limited to … slums, rural fields’ and ‘dilapidated housing’ were also systematically hidden from new users, though ‘rural beautiful natural scenery could be exempted,’ the document notes.”

    Although a TikTok spokesman said the measures were anti-bullying policies that were no longer in effect, the documents The Intercept reviewed explicitly cited subscriber growth as the real reason.

    Given TikTok’s ongoing privacy and security issues, not to mention missteps like this, it’s probably a safe bet that TikTok’s growth is about to slow.

  • France Will Not Ban Huawei From Networks

    Despite U.S. pressure, France has decided to allow Huawei equipment in its 5G networks.

    According to sources who spoke exclusively to Reuters, French cybersecurity agency ANSSI will tell wireless providers to what degree they can use Huawei’s equipment.

    “They don’t want to ban Huawei, but the principle is: ‘Get them out of the core mobile network’,” one of Reuters’ sources said.

    Although not yet official, France’s decision would mirror that made by the UK, where Huawei was permitted in a limited role. The British government decided to allow Huawei equipment to comprise up to 35% of networks, while excluding it from the core network and anywhere near military bases or nuclear sites. The hope is that by keeping the company out of the core network, any security risks can be mitigated.

    The decision is another loss in the U.S. campaign to isolate the Chinese firm amid claims it serves as a spying arm for the Chinese government.

  • Vermont Sues Clearview AI For Breaking Data Laws

    Vermont Attorney General Donovan has filed a lawsuit against Clearview AI, claiming the facial recognition firm has broken multiple state laws.

    Clearview AI has scraped millions of websites to amass a database of some 3 billion photos, which it analyzes using artificial intelligence. The company then makes its software available to law enforcement agencies. Despite its claims of being responsible with the data it collects, recent revelations have shown that nothing could be further from the truth.

    Clearview was caught using its software to monitor when police officers spoke with journalists and discourage them from doing so. The company’s plans to expand and form partnerships with authoritarian regimes were leaked, only to have its client list stolen, showing such expansion plans were already underway. Clearview has also claimed it only makes its software available to law enforcement and security personnel when, in fact, a wide array of investors and other individuals have had access and used the app for their own purposes.

    Now Vermont’s AG is taking measures to call the company to account. The complaint “alleges violations of the Vermont Consumer Protection Act and the new Data Broker Law. Along with the complaint, the State filed a motion for preliminary injunction, asking the Court to order Clearview AI to immediately stop collecting or storing Vermonters’ photos and facial recognition data.”

    AG Donovan did not mince any words in denouncing the company’s practices.

    “I am disturbed by this practice, particularly the practice of collecting and selling children’s facial recognition data,” Attorney General Donovan said. “This practice is unscrupulous, unethical, and contrary to public policy. I will continue to fight for the privacy of Vermonters, particularly our most vulnerable.”

    It’s safe to say individuals around the country will be rooting for AG Donovan.

  • TikTok Plans Transparency Center, Tries to Dispel Censorship Claims

    TikTok has announced the upcoming launch of a new Transparency Center, aimed at pulling the curtain back on the platform’s moderation efforts.

    TikTok has faced ongoing scrutiny over privacy concerns, with at least one lawsuit alleging the company secretly recorded videos and uploaded them to servers in China. Concerns over the app prompted the Department of Defense (DOD) to instruct all personnel to uninstall the app, and for Reddit’s CEO to label the social media app “fundamentally parasitic.”

    In an effort to address concerns, including allegations it censors users, TikTok is launching its Transparency Center where outside experts will have “an opportunity to directly view how our teams at TikTok go about the day-to-day challenging, but critically important, work of moderating content on the platform.

    “Through this direct observation of our Trust & Safety practices, experts will get a chance to evaluate our moderation systems, processes and policies in a holistic manner.”

    Although the Transparency Center initially focuses on censorship, it will eventually help address other security and privacy concerns as well.

    “The Transparency Center will open in early May with an initial focus on TikTok’s content moderation. Later, we will expand the Center to include insight into our source code, and our efforts around data privacy and security. This second phase of the initiative will be spearheaded by our newly appointed Chief Information Security Officer, Roland Cloutier, who starts with the company next month.”

  • British Government Facing Rebellion Over Huawei 5G

    Following the UK’s decision to include Huawei in its 5G networks in a limited role, a group of Tory MPs tried to pass an amendment to stop the firm’s involvement.

    According to a BBC report, former party leader Sir Iain Duncan Smith proposed the amendment to the Telecommunications Infrastructure Bill, an amendment that would have required “high-risk vendors” to be banned from the country’s 5G architecture by the end of 2022. The amendment was defeated by 24 votes, but it signals that Prime Minister Johnson’s own party is divided on the decision.

    Of the Five Eyes countries—the U.S., UK, Australia, New Zealand and Canada—that share intelligence, the U.S., Australia and New Zealand have already banned the Chinese firm. Canada is still undecided, making the U.K. the only country that has welcomed its involvement, albeit in a limited role. As part of the decision to allow Huawei’s participation, the government agreed to limit it to 35% of network equipment and restrict it from the core network, or from being installed near military bases or nuclear sites.

    If this recent vote was any indication, the company’s role in the UK’s future networks is far from resolved.

  • Australia Taking Facebook to Court Over Privacy

    The Australian Information Commissioner has launched legal proceedings against Facebook, accusing the company of repeated breaches of privacy law.

    Facebook allegedly used the personal information of 311,127 Australians, collected through the app This is Your Digital Life, for purposes other than advertised, including disclosing it for political profiling.

    “We consider the design of the Facebook platform meant that users were unable to exercise reasonable choice and control about how their personal information was disclosed,” said Australian Information Commissioner and Privacy Commissioner Angelene Falk in a statement.

    “Facebook’s default settings facilitated the disclosure of personal information, including sensitive information, at the expense of privacy.

    “We claim these actions left the personal data of around 311,127 Australian Facebook users exposed to be sold and used for purposes including political profiling, well outside users’ expectations.”

    If the lawsuit is successful, the court could impose a penalty of A$1,700,000 ($1.1 million) per instance. Should Facebook face the maximum penalty for all 311,127 instances, the total fine would be A$529 billion.
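
    For reference, here is a quick back-of-the-envelope check of that maximum figure, a minimal sketch in Python that uses only the per-instance penalty and instance count cited above:

        # Maximum theoretical penalty: A$1.7 million per alleged instance,
        # multiplied by the 311,127 affected Australian users.
        penalty_per_instance_aud = 1_700_000
        instances = 311_127

        total_aud = penalty_per_instance_aud * instances
        print(f"A${total_aud:,}")  # A$528,915,900,000, i.e. roughly A$529 billion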

  • DuckDuckGo Releases Tracker Radar to Expose Hidden Tracking

    DuckDuckGo is the preeminent privacy-oriented search engine, and the company is taking its privacy focus a step further by releasing a tool to help expose hidden tracking.

    As the company points out, a quality tracking blocker is critical to online privacy. Without one, advertisers can amass a shocking amount of detail about web users, including location history, browsing history, shopping history and more. Combining the data they collect can even give them a pretty good idea of a user’s exact age, ethnicity, preferences and habits.

    When the company started exploring possibilities, it was not happy with the state of current options.

    “When we set out to add tracker protection, we found that existing lists of trackers were mostly manually curated, which meant they were often stale and never comprehensive,” reads the company’s announcement. “And, even worse, those lists sometimes break websites, which hinders mainstream adoption. So, over the last couple of years we built our own data set of trackers based on a crawling process that doesn’t have these drawbacks. We call it DuckDuckGo Tracker Radar. It is automatically generated, constantly updated, and continually tested.

    “Today we’re proud to release DuckDuckGo Tracker Radar to the world, and are also open sourcing the code that generates it. This follows our recent release of our Smarter Encryption data and crawling code (that powers the upgraded website encryption component in our apps and extensions).

    “Tracker Radar contains the most common cross-site trackers and includes detailed information about their tracking behavior, including prevalence, ownership, fingerprinting behavior, cookie behavior, privacy policy, rules for specific resources (with exceptions for site breakage), and performance data.”

    Tracker Radar is included in DuckDuckGo’s Privacy Browser for iOS and Android, as well as the Privacy Essentials browser extension for Safari, Firefox and Chrome on the desktop. Developers can also download Tracker Radar and include it in their own tools.
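
    For developers who want to do just that, here is a minimal, hypothetical sketch in Python of how the Tracker Radar data might be consumed. It assumes the data set ships as one JSON file per tracker domain with fields along the lines of the attributes described above (prevalence, ownership, fingerprinting behavior); the directory layout and exact field names are illustrative assumptions, not the published schema.

        # Hypothetical sketch: scan a local copy of the Tracker Radar data and
        # list widespread tracker domains that also show fingerprinting behavior.
        # Field names ("domain", "prevalence", "fingerprinting", "owner") are
        # assumed for illustration; consult the released data for the real schema.
        import json
        from pathlib import Path

        def load_tracker_records(data_dir):
            """Yield one parsed record per tracker-domain JSON file."""
            for path in sorted(Path(data_dir).glob("*.json")):
                with path.open(encoding="utf-8") as f:
                    yield json.load(f)

        def flag_high_risk(data_dir, min_prevalence=0.01):
            """Return (domain, owner) pairs for prevalent trackers that fingerprint."""
            flagged = []
            for record in load_tracker_records(data_dir):
                prevalence = record.get("prevalence", 0.0)
                fingerprinting = record.get("fingerprinting", 0)
                if prevalence >= min_prevalence and fingerprinting > 0:
                    owner = (record.get("owner") or {}).get("name", "unknown")
                    flagged.append((record.get("domain", "unknown"), owner))
            return sorted(flagged)

        if __name__ == "__main__":
            for domain, owner in flag_high_risk("tracker-radar/domains"):
                print(f"{domain}\t{owner}")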

  • Canada Undecided On Huawei, Will Not ‘Get Bullied’

    Canada’s Minister of Innovation, Science and Industry, Navdeep Bains, has said the country will not be pressured into making a decision on Huawei.

    Canada is part of the Five Eyes group of countries that work closely on intelligence. Of the group, the U.S., Australia and New Zealand have banned Huawei from their 5G networks, while the UK has opted to include the Chinese firm in a limited role. Canada has yet to decide, but Bains warns the country must do what is best for itself.

    According to Bloomberg, Bains told the Canadian Broadcasting Corp. “We will make sure that we proceed in a manner that’s in our national interest. We won’t get bullied by any other jurisdictions.”

    “Countries have raised their concerns. We’re engaged with our Five Eyes partners. We know that this is a very important issue,” he added. “But we will make a decision that makes sense for Canadians and protects Canadians.”

    The U.S. has been pressuring its allies, both in the Five Eyes and EU, to ban Huawei. The U.S. certainly wants to win over its geographically closest ally but, based on Bains’ remarks, that may be easier said than done.

  • Clearview AI Caught Lying About Who Can Use Its Software

    The hits keep on coming: Clearview AI has been caught lying about who can access its controversial facial recognition software.

    Clearview has amassed a database of billions of photos, scraped from millions of websites, including the biggest social media platforms. The company then makes that database available through its facial recognition software. Since The New York Times broke the story in January, Clearview has faced ongoing criticism from lawmakers and privacy advocates alike who say the company represents a fundamental threat to privacy.

    To make matters worse, Buzzfeed discovered documents proving the company plans to expand internationally, including with authoritarian regimes. Following that, Clearview’s entire client list was stolen, showing its international expansion has already begun.

    Amid the scrutiny and controversy, Clearview has tried to reassure critics that it is responsible in its use of its database. In fact, in a blog post on the company’s site, Clearview says its “search engine is available only for law enforcement agencies and select security professionals to use as an investigative tool.”

    Similarly, the company’s Code of Conduct emphasizes that its software is for law enforcement and security professionals, and that the company holds itself to a high standard of ethics, integrity and professionalism.

    There’s just one problem: it’s not true, if the NYT’s report is accurate. According to the report, the NYT “has identified multiple individuals with active access to Clearview’s technology who are not law enforcement officials. And for more than a year before the company became the subject of public scrutiny, the app had been freely used in the wild by the company’s investors, clients and friends.

    “Those with Clearview logins used facial recognition at parties, on dates and at business gatherings, giving demonstrations of its power for fun or using it to identify people whose names they didn’t know or couldn’t recall.”

    This is just the latest example of the irresponsible and unethical way Clearview has conducted business.

  • Intel’s CSME Bug Is ‘Unfixable’

    Intel has been struggling to fix security flaws in its processors, with researchers warning the current flaw is “unfixable.”

    Security firm Positive Technologies has discovered that one of the most recent issues is far more severe than previously thought. The vulnerability impacts the ROM of the Converged Security and Management Engine (CSME). The CSME is a security subsystem within Intel chipsets that, among other things, underpins Intel’s Active Management Technology (AMT), allowing remote out-of-band management that is useful for business and enterprise, but largely unnecessary for the consumer market.

    According to Positive Technologies, the latest discovery has chilling ramifications:

    “By exploiting vulnerability CVE-2019-0090, a local attacker could extract the chipset key stored on the PCH microchip and obtain access to data encrypted with the key,” reads the report. “Worse still, it is impossible to detect such a key breach. With the chipset key, attackers can decrypt data stored on a target computer and even forge its Enhanced Privacy ID (EPID) attestation, or in other words, pass off an attacker computer as the victim’s computer. EPID is used in DRM, financial transactions, and attestation of IoT devices.”

    While Intel is recommending impacted users contact their motherboard manufacturer for a BIOS update, Positive Technologies is warning that will not fix the underlying issue.

    “Since it is impossible to fully fix the vulnerability by modifying the chipset ROM, Positive Technologies experts recommend disabling Intel CSME based encryption of data storage devices or considering migration to tenth-generation or later Intel CPUs. In this context, retrospective detection of infrastructure compromise with the help of traffic analysis systems such as PT Network Attack Discovery becomes just as important.”

    This is just the latest in a number of serious issues Intel has had with its recent chipsets, and could make offerings from AMD and ARM an increasingly appealing alternative.

  • Senators Urge UK to Reconsider Using Huawei

    Following the UK’s decision to include Huawei in its 5G networks, U.S. senators are urging the House of Commons to reconsider.

    A bipartisan group of 20 senators have penned a letter to the House of Commons to express “significant concern with the Government of the United Kingdom’s recent decision to allow Huawei Technologies in its 5G network infrastructure. Given the significant security, privacy, and economic threats posed by Huawei, we strongly urge the United Kingdom to revisit its recent decision, take steps to mitigate the risks of Huawei, and work in close partnership with the U.S. on such efforts going forward.”

    The senators go on to point out that the UK has already “warned that Huawei’s telecommunications equipment raises ‘significant’ security issues,” and highlight the Chinese government’s track record of compelling Chinese companies to cooperate with its intelligence-gathering efforts.

    The letter concludes by thanking the House of Commons for its “consideration of this critical issue, as well as for the trusted partnership between our governments which we remain committed to uphold.”

    The senators’ letter is the latest in efforts by U.S. officials to isolate Huawei and restrict its growth worldwide. Whether such efforts will succeed remains to be seen.

  • Huawei Making Its Own Chips to Bypass U.S. Ban

    Huawei is turning to its own chip-making abilities in an effort to bypass a ban cutting off its access to U.S. technology.

    The U.S. has alleged that Huawei maintains backdoor access to its network equipment through interfaces intended for law enforcement use. As a result, officials have claimed Huawei represents a clear security risk, and that its equipment could be used by Beijing to spy on companies and governments around the world. In fact, Huawei has been accused of basically being an arm of the Chinese government.

    In an effort to curb Huawei’s dominance, the U.S. banned the company and prohibited U.S. firms from doing business with it without a special license. That has yet to slow its growth, however, as the company continues to be one of the dominant network equipment providers.

    Huawei is stepping up its efforts to bypass the U.S. ban. According to Bloomberg, the company is turning to its own chip-making capabilities, selling as many as 50,000 network base stations in the fourth quarter, base stations that are completely free of U.S. chips or technology. Ultimately, the company would prefer to go back to using U.S. chips, but it may soon be too late.

    “It’s still our intention to return to using U.S. technology,” Tim Danks, Huawei’s U.S. executive in charge of partner relations, told Bloomberg. Danks did, however, acknowledge that the longer Huawei uses its own chips, the harder it will be to go back to U.S. chips. This is likely a result of the natural decisions, dependencies and forks in the road that come with any development cycle.

    Either way, the ongoing battle between the U.S. and Huawei shows no sign of abating.

  • Clearview AI App Disabled On the App Store

    Clearview AI’s troubles continue to mount, with the company’s app being disabled on the App Store for violating Apple’s rules.

    Buzzfeed News first noticed that Clearview was doing an end-run around Apple’s distribution rules, “encouraging those who want to use the software to download its app through a program reserved exclusively for developers.” Buzzfeed contacted Apple to inquire about the situation, prompting Apple to investigate. As a result of its investigation, Apple suspended Clearview’s developer account, preventing the app from functioning. Apple told Buzzfeed the developer program Clearview was using is only for distributing apps within a company, not the kind of wide-scale distribution Clearview was using it for.

    In a statement obtained by Buzzfeed, Clearview CEO Hoan Ton-That said: “We are in contact with Apple and working on complying with their terms and conditions. The app can not be used without a valid Clearview account. A user can download the app, but not perform any searches without proper authorization and credentials.”

    Clearview has been on an impressive streak of earning the disfavor of politicians, corporations, privacy advocates, journalists and citizens alike. The company has scraped millions of websites to amass a facial recognition database of some three billion photos, in the process violating the terms of service of industry giants like Google, YouTube, Facebook and Twitter. The company has been accused of monitoring how police are using the app in order to discourage them from interacting with journalists. Clearview was suspected of planning worldwide expansion, including to oppressive regimes, only to have its client list stolen, which showed it had already moved forward with those plans.

    Now the company has managed to violate Apple’s rules about how developers can or cannot distribute apps. Given the company’s shady practices, it’s a safe bet no one will be shedding a tear over this one.

  • FCC Announces Carrier Fines For Selling Customer Data

    The FCC has officially unveiled its proposed fines for wireless carriers over selling customer data to third parties, with T-Mobile receiving the highest fines.

    The FCC’s announcement (PDF) comes after all four major carriers were found to have sold customer location data to third parties without consent, an arrangement that let outside parties, including law enforcement, access that data without proper legal authorization.

    In at least one instance, “a Missouri Sheriff, Cory Hutcheson, used a ‘location-finding service’ operated by Securus, a provider of communications services to correctional facilities, to access the location information of the wireless carriers’ customers without their consent between 2014 and 2017. In some cases, Hutcheson provided Securus with irrelevant documents like his health insurance policy, his auto insurance policy, and pages from Sheriff training manuals as evidence of his authorization to access wireless customer location data.”

    In response to public outcry from journalists, privacy advocates and lawmakers, the FCC investigated, resulting in the proposed fines. The FCC proposes fining T-Mobile $91 million, AT&T $57 million, Verizon $48 million and Sprint more than $12 million. While the proposed fines are a significant amount of money, critics have already denounced them as not going far enough.

    Senator Ron Wyden, a well-known privacy advocate, was scathing in his response:

    If reports are true, then Ajit Pai has failed to protect consumers at every turn. This issue came to light after my office and dedicated journalists discovered how wireless carriers shared Americans’ locations without consent. He investigated only after public pressure mounted.

    — Ron Wyden (@RonWyden) February 27, 2020

    It remains to be seen whether the carriers will appeal the fines. Given the reaction that is already building, they may do well to simply pay the fines and move on. Meanwhile, other companies should take the lesson that it’s never a good idea to double-dip by surreptitiously selling the data of paying customers who expect far better for the money they’re spending.

  • Brexit Means No GDPR Protection: Google May Move UK User Data

    Brexit may have finally happened, but one side effect people may not have anticipated is losing GDPR protection, as Google plans to move UK user data out of the EU.

    The General Data Protection Regulation (GDPR) is one of the most sweeping, comprehensive data protection regulations in the world, aimed at giving people control of their own data and digital footprint. With Britain leaving the EU, sources have told Reuters that Google plans on moving its customers’ data to the U.S.

    British Google users’ data is currently housed in Ireland, which is staying in the EU. To date, Britain has not committed to following the GDPR or implementing its own solution. Google evidently has some concerns that leaving its British data in Ireland would make it harder for British authorities to access it if the UK does not continue abiding by the GDPR.

    As Reuters points out, the decision is likely encouraged by the fact that the U.S. has one of the weakest sets of privacy laws of any major economy. Google will likely welcome the opportunity to deal with less oversight.

  • Google Cracking Down On How Android Apps Use Location Data

    Google is making some welcome changes to how Android apps handle location data, making it easier for users to protect their information.

    In a company blog post, Google announced it is making changes that will sound eerily similar to features that made their way to iOS 13, including the ability to only share location a single time.

    “Now in Android 11, we’re giving users even more control with the ability to grant a temporary ‘one-time’ permission to sensitive data like location,” wrote Krish Vitaldevara, Director of Product Management Trust & Safety, Google Play. “When users select this option, apps can only access the data until the user moves away from the app, and they must then request permission again for the next access.”

    Google also noticed that many apps accessing location data in the background didn’t actually need it and could function just as well only accessing location data when active. As a result, Google will be updating Google Play policies later this year to clearly outline when an app can or cannot access location data in the background. These rules will apply equally to Google’s own apps.

    These changes are good news for all Android users and come at a time when privacy is becoming more important than ever.

  • ISPs Sue Maine Over Privacy Law

    Internet service providers (ISPs) are suing the state of Maine to prevent a law designed to protect consumer privacy from going into effect.

    In June 2019, Maine Governor Janet Mills signed a law prohibiting “the use, sale, or distribution of a customer’s personal information by internet providers without the express consent of the customer.” The law had bipartisan support and passed the state senate unanimously.

    According to Ars Technica, the data covered by the law includes “Web-browsing history, application-usage history, precise geolocation data, the content of customers’ communications, IP addresses, device identifiers, financial and health information, and personal details used for billing.” All of the above data is extremely valuable to ISPs, giving them plenty of motivation to fight the law.

    The lawsuit cites the First Amendment and the U.S. Constitution’s Supremacy Clause. The ISPs say their First Amendment rights would be violated by restrictions on how they can advertise and market to their customers. They say the law violates the Supremacy Clause because a prohibition against sharing data would prevent the ISPs from cooperating with federal agencies.

    Given that a recent court ruling allows states to set laws governing privacy and net neutrality, laws that may go beyond those the federal government enacts, the ISPs may have an uphill battle winning their case. It’s probably a safe bet the citizens of Maine are rooting against them.

  • Ring Making Major Changes To Improve Privacy

    After ongoing issues, Ring has informed users it is implementing a number of changes to improve privacy and security.

    Ring’s blog post comes as the company is trying to do damage control over a number of mishandled privacy issues. First there were multiple reports of the company’s cameras being hacked, followed by VICE investigating the service’s security and finding it wanting, to say the least. The worst revelation came when the Electronic Frontier Foundation (EFF) found that Ring was sharing personally identifiable data with a number of companies without properly disclosing it to consumers. Ring’s response did nothing to help the situation: the company admitted it was sharing data with more companies than it had disclosed, but insisted customers should trust it to do so responsibly.

    In the company’s blog post, Ring tries to address multiple concerns, beginning with two-factor authentication.

    “While we already offered two-factor authentication to customers, starting today we’re making a second layer of verification mandatory for all users when they log into their Ring accounts,” reads the blog post. “This added authentication helps prevent unauthorized users from gaining access to your Ring account, even if they have your username and password.”

    The company also addressed its data sharing policies.

    “Ring does not sell your personal information to anyone. We occasionally collaborate with third-party service providers that specialize in delivering different benefits, such as identifying and solving your problems faster when you contact Ring Community Support, providing you with personalized Ring offers and discounts, and communicating important alerts about your devices, like when your battery is low. Collaborating with these third-party service providers allows us to deliver the best possible Ring experience to you.”

    Ring says it is implementing a number of changes. First, it is temporarily pausing most third-party analytics data sharing. Second, it is providing customers a way to opt out of third-party data sharing for personalized ads.

    Overall, this is a good first step for the company. Had Ring built its service with these measures already in place, it would not have spent the last couple of months losing customer trust and doing damage control.

  • Ring Is a Case Study In Bad Privacy Policy

    Ring has been in the news for its ongoing struggles with privacy issues. Its latest response, not to mention its approach in general, could serve as a case study of what not to do.

    Ring was first in the news over a number of incidents in which individuals were able to hack the cameras, spy on the owners and interact with them. Following that, VICE tested Ring’s security and found it was abysmal. The nail in the coffin was the Electronic Frontier Foundation’s (EFF) investigation, which showed Ring was sharing a wealth of identifiable information with third parties. The worst part is that users were not notified of what data was being collected and shared, let alone given a way to control or opt out of the collection.

    Now CBS News is reporting that “although it confirmed that it shares more data with third parties than it previously told users, the company said in a statement that it contractually limits its partners to use the data only for ‘appropriate purposes,’ including helping Ring improve its app and user experience.”

    Essentially, the company is saying “yes, we got caught doing something we shouldn’t have been doing, but you should totally trust us that we’re doing it responsibly.”

    Ring’s troubles and its response should be a lesson to every company that deals with customers’ private data: A strong commitment to privacy should NEVER be an afterthought, add-on or damage control. In an era when hackers are eager to take advantage of weak data policies, when companies look to profit from their customers’ data and when an interconnected world means that a single breach can have far-reaching consequences, privacy must be built in from the ground up.

    It should go without saying that privacy must be built in from the ground up in a service designed specifically to protect users’ privacy and security. However, since Ring obviously needed someone to say it, the company should stand as an example of what not to do when it comes to protecting customer privacy.

  • Senator Kirsten Gillibrand: ‘The U.S. Needs a Data Protection Agency’

    Senator Kirsten Gillibrand is introducing new legislation to create a Data Protection Agency.

    Senator Gillibrand makes the case that people have untold amounts of data about them scattered across the internet. Even worse, much of that data was collected without consent or, at the very least, without users knowingly agreeing to it being collected. In the digital age, that data represents a gold mine for countless companies who profit from it.

    “I believe that this needs to be fixed, and that you deserve to be in control of your own data,” writes Gillibrand. “You have the right to know if companies are using your information for profit. You need a way to protect yourself, and you deserve a place that will look out for you.”

    Specifically, the legislation Gillibrand is introducing, The Data Protection Act, would “establish an independent federal agency, the Data Protection Agency, that would serve as a ‘referee’ to define, arbitrate, and enforce rules to defend the protection of our personal data.”

    The agency would focus on returning control of their data to Americans, supporting innovation while ensuring fair competition, and advising Congress on digital threats as they emerge, making sure the government is educated and prepared to meet them.

    Gillibrand’s announcement comes amid a growing focus on privacy. Salesforce co-CEO Keith Block recently said the U.S. needed a national privacy law; the California Consumer Privacy Act (CCPA) became law January 1; and Clearview AI has gained infamy as the company “that can end privacy.”

    It remains to be seen if Gillibrand will have the necessary support to pass The Data Protection Act, but it definitely will be welcomed in many circles as a step in the right direction.

  • Senators Introduce Bill to Temporarily Ban Law Enforcement Facial Recognition

    Two senators have introduced a bill to temporarily ban facial recognition technology for government use.

    The proposed bill (PDF) comes in the wake of revelations that law enforcement agencies across the country have been using Clearview AI’s software. The company claims to have a database of billions of photos it has scraped from millions of websites, including the most popular social media platforms, such as Facebook, Twitter and YouTube. Those companies, along with Google, have sent cease-and-desist letters to the facial recognition firm, demanding it stop scraping their sites and delete any photos it has already acquired. The New Jersey Attorney General even got in on the action, ordering police in the state to stop using the software when he was made aware of it.

    Now Senators Jeff Merkley (Oregon) and Cory Booker (New Jersey) are calling for a “moratorium on the government use of facial recognition technology until a Commission recommends the appropriate guidelines and limitation for use of facial recognition technology.”

    The bill goes on to acknowledge the technology is being marketed to law enforcement agencies, but often disproportionately impacts “communities of color, activists, immigrants, and other groups that are often already unjustly targeted.”

    The bill also makes the point that the congressional Commission would need to create guidelines and limitations that would ensure there is not a constant state of surveillance of individuals that destroys a reasonable level of anonymity.

    Given the backlash and outcry against the Clearview AI revelations, it’s a safe bet the bill will pass.