WebProNews

Tag: Surveillance

  • More Surveillance Than China — Clearview AI’s Business Plan

    Few companies would proudly tout their business plan as offering more comprehensive surveillance than China, but that’s exactly what Clearview AI is doing.

    Clearview AI gained fame and notoriety for scraping images from popular websites and social media platforms in an effort to build a massive database of photos for facial recognition — and in violation of those platforms’ terms. The company claimed to only provide its software to law enforcement and government agencies, but reports indicate it was far looser about who had access to its platform than it admitted. In addition, the company was found to be working with various authoritarian regimes.

    As if the company couldn’t become any more controversial, The Washington Post reports the company is proudly calling its surveillance platform more comprehensive than similar systems in China, thanks to the “public source metadata” and “social linkage” information the company bases its product on.

    Clearview is also working to establish itself as the leader in the field, at a time when the industry leaders are taking a more responsible, measured approach to facial recognition. Clearview, in contrast, sees Microsoft, Amazon, and IBM’s cautious approach as a market opportunity, as it seeks to gain investment for a massive expansion effort.

    What’s more, according to The Post, the company is sending out conflicting messages about its plans. Until now, Clearview has promised it will only sell to law enforcement and government agencies. In the presentation material viewed by The Post, however, government contracts are shown as only making up a small portion of the company’s potential market. The presentation material discusses building out the company’s personnel, specifically to target the financial and commercial market. Even more alarming, Clearview wants to build a “developer ecosystem” to help other companies use its database in their own products.

    Jack Poulson, a former Google research scientist and current head of research advocacy group Tech Inquiry, asked if there was anything “they wouldn’t sell this mass surveillance for? If they’re selling it for just regular commercial uses, that’s just mass surveillance writ large. It’s not targeted toward the most extreme cases, as they’ve pledged in the past.”

    Clearview’s unethical behavior and irresponsible approach to privacy and data security, not to mention the legal implications of its data collection, have already led to multiple lawsuits, investigations, and bans in some countries and jurisdictions.

    Here’s to hoping more countries crack down on this bottom-feeder.

  • AWS Bans NSO Group Behind Pegasus Spyware Used Against Journalists

    Amazon Web Services has shut down the accounts of Israeli surveillance firm NSO Group, following explosive revelations of its software being used to target activists and journalists.

    The Washington Post reported that NSO Group’s software, which is normally used to combat terrorists and criminals, “was used in attempted and successful hacks of 37 smartphones belonging to journalists, human rights activists, business executives and two women close to murdered Saudi journalist Jamal Khashoggi.”

    The reaction has been swift and severe, with NSO Group pledging to investigate the incidents. Nonetheless, Motherboard has reported that AWS is shutting down accounts linked to the Israeli company.

    “When we learned of this activity, we acted quickly to shut down the relevant infrastructure and accounts,” an AWS spokesperson told Motherboard in an email.

    This issue is a potential minefield for AWS, since a forensic report by Amnesty International shows NSO Group recently started using AWS services, with captured data from its software being sent to a service on Amazon CloudFront.

    Given the accusations against NSO Group — especially targeting human rights activists and journalists — it’s likely AWS’ response won’t be the last repercussion the company faces.

  • France Clears Microsoft and Google’s Cloud Technology for Sensitive Data

    France has decided Google and Microsoft’s cloud technology can be used for sensitive data — with caveats.

    As cloud computing becomes more important to organizations around the globe, there is a growing concern about the risk of US surveillance of cloud data. The EU, in particular, has increasingly looked with suspicion and distrust at US providers.

    France appears to have come up with a solution, clearing Microsoft and Google’s technology for use in sensitive applications, according to Reuters. France will allow the companies’ technology to be used as part of a homegrown solution, as long as the servers are operated on EU soil and the companies storing and processing the data are European-owned.

    “We therefore decided that the best companies – I’m thinking in particular of Microsoft or Google – could license all or part of their technology to French companies,” said French Finance Minister Bruno Le Maire.

    Companies that help create solutions meeting France’s requirements will receive a “trustworthy cloud” label.

    “We… hope that other Franco-American alliances will emerge in this area, which will allow us to have the best technology while guaranteeing the independence of French data,” said Minister for Digital Affairs Cedric O.

  • ‘Fourth Amendment Is Not For Sale Act’ Tackles Warrantless Surveillance

    A proposed piece of legislation would tackle surveillance and the warrantless purchase of individual location data.

    The “Fourth Amendment Is Not For Sale Act” is a bill that has wide bipartisan support and would address some of the biggest challenges in the realm of surveillance. Clearview AI made headlines in early 2020 as it built a business model on scraping images from social media networks and using them to build an AI-powered facial recognition database.

    Clearview AI sold access to its database to law enforcement agencies all over the country, transactions that were performed without a warrant. Other companies have been accused of doing the same thing, selling location data to law enforcement agencies without due process or authorized warrants.

    The Fourth Amendment Is Not For Sale Act would address that loophole, ensuring courts have a say in the process.

    “Doing business online doesn’t amount to giving the government permission to track your every movement or rifle through the most personal details of your life,” Senator Ron Wyden said. “There’s no reason information scavenged by data brokers should be treated differently than the same data held by your phone company or email provider. This bill closes that legal loophole and ensures that the government can’t use its credit card to end-run the Fourth Amendment.”

    “The Fourth Amendment’s protection against unreasonable search and seizure ensures that the liberty of every American cannot be violated on the whims, or financial transactions, of every government officer,” Senator Rand Paul said. “This critical legislation will put an end to the government’s practice of buying its way around the Bill of Rights by purchasing the personal and location data of everyday Americans. Enacting the Fourth Amendment is Not For Sale Act will not only stop this gross abuse of privacy, but also stands for the fundamental principle that government exists to protect, not trade away, individual rights.”

  • Hackers Access 150,000 Security Cameras: Tesla, Hospitals and Prisons Exposed

    A group of hackers has gained access to roughly 150,000 Verkada security cameras, exposing a slew of customer live feeds.

    Verkada is a Silicon Valley startup that specializes in security systems. The company’s cameras are used by a wide range of companies and organizations, including Tesla, police departments, hospitals, clinics, schools and prisons.

    The group responsible is an international collective of hackers. They claim to have hacked Verkada to shed light on how pervasive surveillance has become.

    In one of the videos, seen by Bloomberg, eight hospital staffers are seen tackling a man and restraining him. Other video feeds include women’s clinics, as well as psychiatric hospitals. What’s more, some of the feeds — including those of some hospitals — use facial recognition to identify and categorize people.

    The feeds from the Madison County Jail in Huntsville, Alabama, were particularly telling. Of the 330 cameras in the jail, some were “hidden inside vents, thermostats and defibrillators.”

    The entire case is disturbing on multiple fronts. It’s deeply concerning that a company specializing in security, and selling that security to other organizations, would suffer such a devastating breach. It’s equally concerning, however, to see the depth of surveillance being conducted, as well as the lengths being taken to hide the surveillance.

  • Clearview AI Dealt Blow in Canada, Called Illegal

    Clearview AI has been dealt its biggest blow yet, with Canada calling the app illegal and demanding it delete photos of Canadian citizens.

    Clearview AI made headlines last year when the depth of its activities was uncovered. The company scraped photos from countless websites, including the top social media platforms, and amassed a database of billions of photos. Clearview then sold access to that database to law enforcement officials all over the country.

    Despite its claims, however, Clearview wasn’t the responsible purveyor of information it claimed to be. Instead, it gave investors, clients and friends access to the company’s database for their own personal uses, including entertainment. The company also began expanding internationally, working on deals with authoritarian regimes.

    Despite multiple investigations in the US, it appears Canada has taken the strongest stance yet, declaring the software illegal.

    “Clearview sells a facial recognition tool that allows law enforcement and commercial organizations to match photographs of unknown people against a massive databank of 3 billion images, scraped from the Internet,” said Daniel Therrien, Privacy Commissioner of Canada. “The vast majority of these people have never been, and will never be, implicated in any crime.

    “What Clearview does is mass surveillance and it is illegal. It is an affront to individuals’ privacy rights and inflicts broad-based harm on all members of society, who find themselves continually in a police lineup. This is completely unacceptable.”

    Clearview tried to make the claim that it did not need permission to collect the photos it uses, since they’re already posted on social media. The Canadian government disagreed, since Clearview’s purpose for collecting the photos differed from the reason people uploaded them.

    As a result, the investigation came to the following conclusion:

    We recommended that Clearview: (i) cease offering its facial recognition tool to clients in Canada; (ii) cease the collection, use and disclosure of images and biometric facial arrays collected from individuals in Canada; and (iii) delete images and biometric facial arrays collected from individuals in Canada in its possession.

    While the government doesn’t yet have the authority to enforce the investigation’s recommendations, Therrien is hopeful Parliament will take them under advisement when it considers upcoming privacy legislation.

    “The company essentially claims that individuals who placed or permitted their images to be placed on the Internet lacked a reasonable expectation of privacy in such images, that the information was publicly available, and that the company’s appropriate business interests and freedom of expression should prevail,” Therrien added.

    “My colleagues and I think these arguments must be rejected. As federal Commissioner, I hope that Parliament considers this case as it reviews Bill C-11, the proposed new private-sector privacy legislation. I hope Parliamentarians will send a clear message that where, as here, there is a conflict between commercial objectives and privacy protection, Canadians’ privacy rights should prevail.”

  • IRS Under Investigation For Illegally Tracking Americans via Their Phones

    The IRS is under investigation by the US Treasury’s Inspector General for purchasing smartphone data to illegally track Americans.

    The issue began when Senators Ron Wyden and Elizabeth Warren sent a letter to the Inspector General demanding the IRS be investigated. According to the letter, the IRS had been purchasing bulk data from a company named Venntel. The information contained location data from Americans’ phones, based on the various apps they use.

    According to Motherboard, a Wyden aide said “the IRS wanted to find phones, track where they were at night, use that as a proxy as to where the individual lived, and then use other data sources to try and identify the person.” A person who used to work for Venntel previously told Motherboard that Venntel customers can use the tool to see which devices are in a particular house, for instance.

    As Wyden and Warren’s letter points out, the Supreme Court ruled in 2018 that collecting significant quantities of historical data from phones was covered under the Fourth Amendment, and therefore requires a search warrant. The fact that the IRS obtained no such warrant puts it in legally dubious territory.

    Putting aside the legal ramifications, it’s a safe bet that few Americans would be OK with the IRS tracking where they sleep at night.

  • Amazon Follows IBM, Bans Police Use of Rekognition

    Amazon has announced a one-year moratorium on police use of its facial recognition software, Rekognition.

    IBM previously announced it was ending the sale of general purpose facial recognition software in an effort to support civil rights and police reform. Now Amazon is following suit, banning police use of its own facial recognition software for one year.

    Amazon’s statement, in its entirety, reads:

    We’re implementing a one-year moratorium on police use of Amazon’s facial recognition technology. We will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.

    We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.

    When IBM announced its decision, we wrote: “In the wake of recent events, however, it’s likely IBM won’t be the only company to take such a stand.”

    Amazon has proved that statement true, and it will likely not be the last company to do so.

  • ‘Fool Me Once…’ — Clearview AI Promises to End Private Contracts

    Clearview AI has promised it will end all contracts with private companies in the face of public backlash and lawsuits.

    Clearview made news as a facial recognition firm that had scraped billions of images from the web and social media, and then made them available for facial recognition searches. The company has repeatedly tried to portray itself as a responsible steward of the technology it has developed and is making available, initially claiming its service was only for law enforcement and government agencies.

    In short order, however, it has become apparent Clearview cannot be trusted. Reports surfaced that the company was selling its services internationally, including to oppressive regimes. One of the more disturbing revelations was that the company was monitoring the searches performed by law enforcement and using that information to try to discourage police from talking with journalists.

    Throughout it all, however, the company has continued to maintain that it only made its software available to law enforcement and select security personnel — only that wasn’t true. Reports showed the company had made its software available to many private companies and individuals, including some who used it for their own personal benefit.

    According to BuzzFeed, in an effort to deal with the lawsuit it is facing in Illinois, the company is now promising it will cancel its contracts with private organizations.

    “Clearview is cancelling the accounts of every customer who was not either associated with law enforcement or some other federal, state, or local government department, office, or agency,” the company said in a filing BuzzFeed has seen. “Clearview is also cancelling all accounts belonging to any entity based in Illinois.”

    There’s only one problem with this promise: It comes from a company that has already proven itself to be dishonest, unscrupulous and completely untrustworthy. Here’s to hoping the judge sees right through this latest ploy.

  • FBI Using Fitness App to Track You

    It was bound to happen. With mass surveillance being one of the most effective tools in the fight against the coronavirus pandemic, the FBI may be taking the first steps.

    On Monday, the FBI sent out a tweet recommending its fitness app to individuals looking for ways to stay active and fit while stuck indoors as a result of the virus.

    #MondayMotivation Are you looking for tips for indoor workouts? Download the #FBI’s Physical Fitness Test app to learn proper form for exercises you can do at home like pushups and situps. http://ow.ly/6y3f50yQeHj

    — FBI (@FBI) 3/23/20

    As multiple users started pointing out, however, when the app is downloaded, it asks for specific location information, as well as what WiFi networks you connect to. While Twitter may not always be the bastion of sound, measured responses, in this case the Twitterverse appears to be spot on in largely taking a hard pass on downloading the app.

    The app is, at least in part, governed by the Privacy Policy posted on fbi.gov, especially when the app is accessing the site. That policy makes the following statement:

    “To protect the system from unauthorized use and to ensure that the system is functioning properly, individuals using this computer system are subject to having all of their activities monitored and recorded by personnel authorized to do so by the FBI (and such monitoring and recording will be conducted). Anyone using this system expressly consents to such monitoring and is advised that if such monitoring reveals evidence of possible abuse or criminal activity, system personnel may provide the results of such monitoring to appropriate officials. Unauthorized attempts to upload or change information or otherwise cause damage to this service are strictly prohibited and may be punishable under applicable federal law.”

    In view of that statement, it looks as though it is technically possible for the FBI to legally justify using the app for surveillance. Consider yourself forewarned.

  • Senators Introduce Bill to Temporarily Ban Law Enforcement Facial Recognition

    Two senators have introduced a bill to temporarily ban facial recognition technology for government use.

    The proposed bill (PDF) comes in the wake of revelations that law enforcement agencies across the country have been using Clearview AI’s software. The company claims to have a database of billions of photos it has scraped from millions of websites, including the most popular social media platforms, such as Facebook, Twitter and YouTube. Those companies, along with Google, have sent cease-and-desist letters to the facial recognition firm, demanding it stop scraping their sites and delete any photos it has already acquired. The New Jersey Attorney General even got in on the action, ordering police in the state to stop using the software when he was made aware of it.

    Now Senators Jeff Merkley (Oregon) and Cory Booker (New Jersey) are calling for a “moratorium on the government use of facial recognition technology until a Commission recommends the appropriate guidelines and limitation for use of facial recognition technology.”

    The bill goes on to acknowledge the technology is being marketed to law enforcement agencies, but often disproportionately impacts “communities of color, activists, immigrants, and other groups that are often already unjustly targeted.”

    The bill also makes the point that the congressional Commission would need to create guidelines and limitations that would ensure there is not a constant state of surveillance of individuals that destroys a reasonable level of anonymity.

    Given the backlash and outcry against the Clearview AI revelations, it’s a safe bet the bill will pass.

  • FCC Finds Carriers Broke the Law by Selling Location Data

    The Federal Communications Commission (FCC) has found that wireless carriers violated federal law in selling customer location data to third parties.

    FCC Chairman Ajit Pai has sent letters to several lawmakers informing them of the results of the agency’s investigation. According to Engadget, in 2018 it first came to light that wireless carriers were selling “their customers’ real-time location data to aggregators, which then resold it to other companies or even gave it away.”

    Senator Ron Wyden brought to Chairman Pai’s attention the case of prison phone company Securus Technologies. Securus was buying wireless location data and providing “that information, via a self-service web portal, to the government for nothing more than the legal equivalent of a pinky promise. This practice skirts wireless carriers’ legal obligation to be the sole conduit by which the government conducts surveillance of Americans’ phone records, and needlessly exposes millions of Americans to potential abuse and surveillance by the government.”

    Once the information came to light, Verizon was the first to promise to stop the practice, with the other three carriers following suit. Even so, the FCC launched an investigation to determine if federal laws were broken, and it appears they were.

    In the letters, Chairman Pai said:

    “Fulfilling the commitment I made in that letter, I wish to inform you that the FCC’s Enforcement Bureau has completed its extensive investigation and that it has concluded that one or more wireless carriers apparently violated federal law.

    “I am committed to ensuring that all entities subject to our jurisdiction comply with the Communications Act and the FCC’s rules, including those that protect consumers’ sensitive information, such as real-time location data. Accordingly, in the coming days, I intend to circulate to my fellow Commissioners for their consideration one or more Notice(s) of Apparent Liability for Forfeiture in connection with the apparent violation(s).”

    That last part, in particular, is an indication the FCC will take some form of action against the offending parties.

    It’s one thing when companies offering a free service look for ways to profit off of their customers’ data—with the proper disclosures, of course. It’s quite another when companies that already charge for the service they offer then proceed to double-dip by selling their customers’ data, let alone doing it without properly disclosing it. It’s nice to see the FCC agrees such behavior is illegal, not to mention unethical.

  • Ring Uses Android Doorbell App to Surveil Customers

    The Electronic Frontier Foundation (EFF) has discovered that Ring’s Android doorbell camera app is being used to surveil customers.

    According to the EFF, the Ring Android app is “packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII). Four main analytics and marketing companies were discovered to be receiving information such as the names, private IP addresses, mobile network carriers, persistent identifiers, and sensor data on the devices of paying customers.”

    Specifically, the data is shared with Branch, AppsFlyer, MixPanel and Google’s Crashlytics. EFF’s investigation was able to uncover what data was being sent to each entity.

    Branch is a “deep linking” platform that receives several unique identifiers, “as well as your device’s local IP address, model, screen resolution, and DPI.”

    AppsFlyer is “a big data company focused on the mobile platform,” and receives information that includes unique identifiers, when Ring was installed, interactions with the “Neighbors” section and more. Even worse, AppsFlyer “receives the sensors installed on your device (on our test device, this included the magnetometer, gyroscope, and accelerometer) and current calibration settings.”

    MixPanel receives the most information, including “users’ full names, email addresses, device information such as OS version and model, whether bluetooth is enabled, and app settings such as the number of locations a user has Ring devices installed in.”

    It’s unknown what data is sent to Crashlytics, although it’s likely that’s the most benign of the data-sharing partnerships.

    The worst part is that, while all of these companies are listed in Ring’s third-party services list, the amount of data collection is not. As a result, there is no way for a customer to know how much data is being collected or what is being done with it, let alone have the option to opt out of it.

    Ring has been in the news recently for several high-profile security issues, including its cameras being hacked and a VICE investigation revealing an abysmal lack of basic security features. While both of these can be chalked up to errors or incompetence, this latest discovery is deeply disturbing because it speaks to how Ring is designed to function—namely as a way for the company to profit off of surveilling its own customers.

  • NJ Bans Clearview; Company Faces Potential Class-Action

    Facial recognition firm Clearview AI is facing a potential class-action lawsuit, while simultaneously being banned from use by NJ police, according to separate reports by the New York Times (NYT) and CNET.

    The NYT is reporting that Clearview has found itself in hot water with the New Jersey attorney general over the main promotional video it was running on its website. The video showed Attorney General Gurbir Grewal and two state troopers at a press conference detailing an operation to apprehend 19 men accused of trying to lure children for sex, an operation for which Clearview claimed at least partial credit.

    Mr. Grewal was not impressed with Clearview using his likeness in its promotional material, or with the potential legal and ethical issues the service poses.

    “Until this week, I had not heard of Clearview AI,” Mr. Grewal said in an interview. “I was troubled. The reporting raised questions about data privacy, about cybersecurity, about law enforcement security, about the integrity of our investigations.”

    Mr. Grewal was also concerned about the company sharing details of ongoing investigations.

    “I was surprised they used my image and the office to promote the product online,” Mr. Grewal continued, while also acknowledging that Clearview had been used to identify one of the suspects. “I was troubled they were sharing information about ongoing criminal prosecutions.”

    As a result of his concerns, Mr. Grewal has told state prosecutors in NJ’s 21 counties that police should not use the app.

    At the same time, CNET is reporting an individual has filed a lawsuit in the US District Court for the Northern District of Illinois, Eastern Division, and is seeking class-action status.

    “Without obtaining any consent and without notice, Defendant Clearview used the internet to covertly gather information on millions of American citizens, collecting approximately three billion pictures of them, without any reason to suspect any of them of having done anything wrong, ever,” alleges the complaint. “Clearview used artificial intelligence algorithms to scan the facial geometry of each individual depicted in the images, a technique that violates multiple privacy laws.”

    It was only a matter of time before Clearview faced the fallout from its actions. It appears that fallout is happening sooner rather than later.

  • Troubles Mount For Clearview AI, Facial Recognition Firm

    According to a report by The Verge, Clearview AI is facing challenges to both its credibility and the legality of the service it provides.

    On the heels of reports, originally covered by the New York Times, that Clearview AI has amassed more than three billion photos, scraped from social media platforms and millions of websites—and has incurred Twitter’s ire in the process—it appears the company has not been honest about its background, capabilities or the extent of its successes.

    A BuzzFeed report points out that Clearview AI’s predecessor program, Smartcheckr, was specifically marketed as being able to “provide voter ad microtargeting and ‘extreme opposition research’ to Paul Nehlen, a white nationalist who was running on an extremist platform to fill the Wisconsin congressional seat of the departing speaker of the House, Paul Ryan.”

    Further hurting the company’s credibility is an example it uses in its marketing, about an alleged terrorist who was apprehended in New York City after causing panic by disguising rice cookers as bombs. The company cites the case as one of thousands of instances in which it has aided law enforcement. The only problem is that the NYPD said it did not use Clearview in that case.

    “The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a spokesperson for the NYPD told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.”

    That last statement, regarding “lawfully possessed arrest photos,” is particularly stinging as the company is beginning to face legal pushback over its activities.

    New York Times journalist Kashmir Hill, who originally broke the story, cited the example of asking police officers she was interviewing to run her face through Clearview’s database. “And that’s when things got kooky,” Hill writes. “The officers said there were no results — which seemed strange because I have a lot of photos online — and later told me that the company called them after they ran my photo to tell them they shouldn’t speak to the media. The company wasn’t talking to me, but it was tracking who I was talking to.”

    Needless to say, such an Orwellian use of the technology is not sitting well with some lawmakers. According to The Verge, members of Congress are beginning to voice concerns, with Senator Ed Markey sending a letter to Clearview founder Hoan Ton-That demanding answers.

    “The ways in which this technology could be weaponized are vast and disturbing. Using Clearview’s technology, a criminal could easily find out where someone walking down the street lives or works. A foreign adversary could quickly gather information about targeted individuals for blackmail purposes,” writes Markey. “Clearview’s product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified.”

    The Verge also cites a recent Twitter post by Senator Ron Wyden, one of the staunchest supporters of individual privacy, in which he comments on the above disturbing instance of Clearview monitoring Ms. Hill’s interactions with police officers.

    “It’s extremely troubling that this company may have monitored usage specifically to tamp down questions from journalists about the legality of their app. Everyday we witness a growing need for strong federal laws to protect Americans’ privacy.”

    —Ron Wyden (@RonWyden) January 19, 2020

    Ultimately, Clearview may well provide the impetus for lawmakers to craft a comprehensive, national-level privacy law, something even tech CEOs are calling for.

  • EU Considering A Five-Year Ban On Facial Recognition In Public

    Politico is reporting that the European Union (EU) is considering banning facial recognition in public areas for up to five years.

    Facial recognition is quickly becoming the latest battleground in the fight over user privacy. Some countries, such as China, have embraced the technology and taken surveillance of citizens to an all-new level. The U.S. has waffled back and forth, rolling out facial recognition in sensitive areas—such as airports—but often making participation optional. However, the Department of Homeland Security recently made headlines with a proposal that would expand facial recognition checks at airports, making them mandatory for citizens and foreigners alike.

    The EU, however, may be preparing to take the strongest stand against facial recognition and toward protecting individual privacy. According to a draft document Politico obtained, the EU is looking to expand its already rigorous privacy laws, stating that a “future regulatory framework could go further and include a time-limited ban on the use of facial recognition technology in public spaces.”

    The ban would cover facial recognition use by both public and private entities.

    “This would mean that the use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period (e.g. 3-5 years) during which a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed,” adds the document.

    As the debate about facial recognition continues, it will be interesting to see where the U.S. lands: whether it will emphasize protecting individual privacy like the EU, or emphasize surveillance like China.

  • ToTok Co-Creator Denies App Is A Tool For UAE Spying

    ToTok was recently removed from both Apple and Google’s app stores over allegations it was being used by the United Arab Emirates government to spy on users. In an interview with the Associated Press, co-creator Giacomo Ziani defended the app and denied it was a tool for spying.

    ToTok was released only months ago, and quickly rose to become one of the most popular social apps. Helping drive its popularity was the fact that it was the only app offering internet calling that was allowed in the UAE. Competing apps, such as FaceTime, WhatsApp, Skype and others, are not allowed.

    In a report by the New York Times—that was based on information from American officials who had access to classified intelligence—the app was accused of being a spying tool for the UAE to “track every conversation, movement, relationship, appointment, sound and image of those who install it on their phones.”

    Ziani, however, defended his creation and denied the allegations.

    “I was not aware, and I’m even not aware now of who was who, who was doing what in the past,” Ziani told the AP.

    Ziani attributed the allegations to professional jealousy, although he failed to provide any evidence to support his claim. It will be interesting to watch what happens with ToTok and whether Ziani is successful in getting the app reinstated on Apple and Google’s app stores.

    In the meantime, ToTok is a cautionary tale that illustrates the lengths some governments and organizations will go to in order to spy on individuals.

  • Amazon and Ring Sued In Federal Court Over Failure to Secure Cameras

    TMZ is reporting that Ring and its parent company, Amazon, are being sued in federal court in California, claiming they have failed to protect users.

    Ring made headlines a couple of weeks ago when a number of cameras were hacked. In one particularly disturbing incident, a camera in an 8-year-old girl’s room was hacked, with the hacker talking to her, claiming to be her best friend. There have been other incidents as well, with a woman woken by a hacker shouting at her and a couple subjected to racist comments about their son.

    To make matters worse, VICE tested the Ring devices and found their security was abysmal. There was no way to see if anyone else was logged in to the camera, nor was there a log of who had accessed the device in the past. In other words, once a camera is hacked, there is virtually no way of knowing it has been compromised.

    The lawsuit’s plaintiff, John Baker Orange, tells a story similar to the other hacking incidents. He claims that “someone hacked into his outdoor security cameras and started commenting on his kids who were playing basketball … encouraging them to get closer to the camera.” If the claim is true, it could be the earliest known example of Ring cameras being maliciously hacked, as Orange claims the incident occurred last July.

    For a company specializing in security hardware, failure to provide basic security measures is beyond abysmal—it is unforgivable. It’s a safe bet this won’t be the last lawsuit Ring and Amazon face.

  • Pentagon Warns Military Personnel Not to Use Home DNA Kits

    NBC News is reporting that the Pentagon has told military personnel not to use home DNA testing kits.

    According to a memo NBC News obtained, “Under Secretary of Defense for Intelligence Joseph Kernan and James Stewart, acting Under Secretary of Defense for Personnel and Readiness, said that DNA testing companies were targeting military members with discounts and other undisclosed incentives.”

    The memo expressed concern that DNA companies’ policies may pose a greater risk to military personnel than to the general population. Inaccurate medical analysis impacting military medical disclosures, data being sold to third parties, data being used for surveillance and the possibility of tracking people without their consent were some of the specific concerns mentioned.

    Experts have for some time been warning about the privacy implications of home DNA testing kits and the companies behind them. The fact that the Pentagon is taking such a strong stand certainly adds weight to those concerns.

  • China Requiring Facial Recognition Scans For Mobile Users

    China is ramping up its attacks on privacy, with new rules due to take effect requiring all citizens to submit to facial recognition scans when registering for mobile service. The BBC is reporting the new rules were first announced in September and went into effect December 1.

    China has been working for years to eliminate online anonymity among its citizens, even requiring online platforms to verify users’ identities before they’re allowed to post content. These new regulations are an effort to “strengthen” the government’s surveillance system and give it a way to track mobile users.

    According to the BBC, “Jeffrey Ding, a researcher on Chinese artificial intelligence at Oxford University, said that one of China’s motivations for getting rid of anonymous phone numbers and internet accounts was to boost cyber-security and reduce internet fraud.

    “But another likely motivation, he said, was to better track the population: ‘It’s connected to a very centralised push to try to keep tabs on everyone, or that’s at least the ambition.’”

    This goal is much easier in a country like China, where the vast majority of citizens access the internet via their phones. China is already known as a surveillance state, where facial recognition is regularly used to track citizens. This latest move will only increase the government’s surveillance powers.

  • Microsoft Hires Former Attorney General Eric Holder To Audit AnyVision

    NBC News is reporting that Microsoft has hired former Attorney General Eric Holder to investigate AnyVision, an Israel-based facial recognition firm the company invested in.

    AnyVision creates facial recognition software in use by the Israeli military at border crossings. The software is used to log the faces of Palestinians entering Israel. However, according to NBC News, the software is also used to secretly surveil Palestinians throughout the West Bank.

    According to NBC News sources, AnyVision’s tech is at the heart of a secret military project, with one of those sources referring to it by the codename “Google Ayosh.” “Ayosh” refers to the West Bank and “Google” is a nod to the kind of powerful search capabilities Google is known for—although the search giant is not involved in the project. Google Ayosh was evidently so successful that it led to AnyVision winning Israel’s top defense prize in 2018.

    Microsoft invested $74 million in Series A funding in AnyVision in June, through its venture capital arm, M12. In the wake of NBC News’ report, however, the company is concerned that AnyVision’s involvement in Google Ayosh may violate its ethical principles for the use of facial recognition: “fairness, transparency, accountability, nondiscrimination, notice and consent, and lawful surveillance.”

    Compliance with Microsoft’s facial recognition principles was included as part of the terms of the deal when Microsoft invested, giving the company the right to perform the audit.

    When NBC News first reported on the surveillance allegations, a Microsoft spokesman said that, if true, “they would violate our facial recognition principles.”

    “If we discover any violation of our principles, we will end our relationship.”

    At the same time, AnyVision has denied the reports, stating: “All of our installations have been examined and confirmed against not only Microsoft’s ethical principles, but also our own internal rigorous approval process.”

    Whatever the case, Holder and a team of former federal prosecutors—currently working at law firm Covington & Burling—will investigate the allegations.