WebProNews

Category: CybersecurityUpdate

  • Clearview AI Expanding Internationally—With Authoritarian Regimes

    In further proof that Clearview AI can’t be trusted, BuzzFeed News is reporting that the facial recognition firm plans to sell its services to authoritarian regimes.

    Clearview claims to have scraped over 3 billion photos from millions of websites, including the major social media platforms. The company then makes those photos available, in a searchable database, to hundreds of law enforcement agencies across the country.

    According to BuzzFeed, “a document obtained via a public records request reveals that Clearview has been touting a ‘rapid international expansion’ to prospective clients using a map that highlights how it either has expanded, or plans to expand, to at least 22 more countries, some of which have committed human rights abuses.”

    Among those countries are the United Arab Emirates, which is known for cracking down on dissidents, as well as Qatar and Singapore, both of which have far more restrictive human rights laws than Western countries.

    In an interview with BuzzFeed, Albert Fox Cahn, a fellow at New York University and the executive director of the Surveillance Technology Oversight Project, expressed concern about the implications of the software being used by oppressive regimes.

    “It’s deeply alarming that they would sell this technology in countries with such a terrible human rights track record, enabling potentially authoritarian behavior by other nations,” he said.

    Clearview CEO Hoan Ton-That has been defending his company amid growing scrutiny and concern over the legality and ethics of its behavior. The New Jersey Attorney General recently enacted a moratorium on police departments using the company’s service. Twitter, Facebook, Google and YouTube have sent cease-and-desist letters to Clearview. Now, as lawmakers increasingly turn their attention toward the company, it’s a safe bet this latest news will not help Clearview’s case.

  • Twitter Tackles Deceptive Information and Manipulated Media

    Following Facebook and Reddit, Twitter has unveiled new rules governing how it will handle deceptive information and manipulated media.

    Social media platforms are increasingly under pressure to moderate misleading information. With an upcoming election, the pressure is even greater, as a single well-timed, misleading post could have profound repercussions. Following similar announcements from Facebook and Reddit, Twitter has announced its own policy to tackle the issue.

    The company detailed the new rule in a blog post:

    “You may not deceptively share synthetic or manipulated media that are likely to cause harm. In addition, we may label Tweets containing synthetic and manipulated media to help people understand the media’s authenticity and to provide additional context.”

    According to a chart the company uses, three primary factors will determine how Twitter responds: whether the media is significantly and deceptively altered or fabricated; whether it is shared in a deceptive manner; and whether it is likely to cause serious harm or impact public safety.
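
    To make the chart’s logic concrete, here is a rough sketch of how those three factors might combine. This is a hypothetical simplification for illustration only, not Twitter’s actual enforcement rules.

```python
# Hypothetical simplification of the three factors described above;
# illustrative only, not Twitter's actual moderation logic.
def twitter_response(significantly_altered: bool,
                     shared_deceptively: bool,
                     likely_serious_harm: bool) -> str:
    if significantly_altered and shared_deceptively and likely_serious_harm:
        return "remove"      # all three factors present: strongest response
    if significantly_altered or shared_deceptively:
        return "label"       # add context so people can judge authenticity
    return "no action"
```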

    The company admits it may make mistakes once enforcement of the policy begins on March 5, 2020.

    “This will be a challenge and we will make errors along the way — we appreciate the patience. However, we’re committed to doing this right. Updating our rules in public and with democratic participation will continue to be core to our approach.

    We’re working to serve the public conversation, and doing our work openly and with the people who use our service.”

  • Avast Caught Selling Detailed Browsing History to Marketers

    Another day, another company abusing customer privacy. A joint investigation by PCMag and Motherboard has discovered that antivirus maker Avast, which also owns AVG, has been selling extremely detailed information about customers’ browsing histories to marketers.

    The company division responsible is Jumpshot, and it has “been offering access to user traffic from 100 million devices.” In a tweet the company sent last month to attract new clients, it promised to deliver “‘Every search. Every click. Every buy. On every site’ [emphasis Jumpshot’s],” according to Motherboard.

    In fact, the level of detail the data provides is astounding, allowing clients to “view the individual clicks users are making on their browsing sessions, including the time down to the millisecond. And while the collected data is never linked to a person’s name, email or IP address, each user history is nevertheless assigned to an identifier called the device ID, which will persist unless the user uninstalls the Avast antivirus product.”

    The data is anonymized so that, in theory, it can’t be tied to an individual user. However, the device ID is where the trouble comes in. For example, all a retailer would need to do is compare the time stamp that correlates to a specific purchase against their records to identify the customer. It would then be a simple matter to use that device ID to build a complete—and completely identifiable—profile of that person. With their entire browsing history, the retailer would know everything about what sites they visit, their habits, what their interests are and who their friends are.
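
    To make the risk concrete, here is a toy sketch of that timestamp-correlation attack. All data, field names, and URLs below are hypothetical.

```python
# Toy illustration of the re-identification risk described above.
# All records, field names, and URLs are hypothetical.

# The retailer's own records: it already knows who bought what, and when.
purchases = [
    {"customer": "alice@example.com", "ts_ms": 1580486400123},
    {"customer": "bob@example.com",   "ts_ms": 1580490000456},
]

# "Anonymized" clickstream: no names, but a persistent device ID
# and millisecond timestamps for every click.
clicks = [
    {"device_id": "abc123", "ts_ms": 1580486300000,
     "url": "https://health-site.example/condition"},
    {"device_id": "abc123", "ts_ms": 1580486400123,
     "url": "https://shop.example/checkout"},
    {"device_id": "xyz789", "ts_ms": 1580490000456,
     "url": "https://shop.example/checkout"},
]

# Step 1: match a known purchase timestamp to a click on the checkout page,
# recovering the "anonymous" device ID behind a named customer.
def identify(purchase, clicks):
    for c in clicks:
        if c["ts_ms"] == purchase["ts_ms"] and "checkout" in c["url"]:
            return c["device_id"]
    return None

# Step 2: that device ID now unlocks the person's entire browsing history.
device = identify(purchases[0], clicks)
history = [c["url"] for c in clicks if c["device_id"] == device]
```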

    According to PCMag, Jumpshot even offered different products tailored to delivering different subsets of information. For example, one product focused on search results, both the terms searched for and the results visited. Another product focused on tracking what videos people are watching on Facebook, Instagram and YouTube.

    The granularity is particularly disturbing in relation to a contract Jumpshot had with marketing provider Omnicom Media Group, to provide it the “All Clicks Feed.” The service provides “the URL string to each site visited, the referring URL, the timestamps down to the millisecond, along with the suspected age and gender of the user, which can be inferred based on what sites the person is visiting.” While the device ID was stripped from the data for most companies that signed up for the All Clicks Feed, Omnicom Media Group was the exception, receiving the data with device IDs intact.

    Much of the collection occurred through the antivirus software’s browser extensions, and Avast has since stopped sharing the data it collects through those extensions. However, the company has not committed to delete the data it has already collected. The company can also still collect browsing history through its Avast and AVG antivirus software, on both desktop and mobile.

    That ambiguity has not gone over well with Senator Ron Wyden, a staunch privacy advocate. According to both PCMag and Motherboard, Wyden said in a statement that “It is encouraging that Avast has ended some of its most troubling practices after engaging constructively with my office. However I’m concerned that Avast has not yet committed to deleting user data that was collected and shared without the opt-in consent of its users, or to end the sale of sensitive internet browsing data. The only responsible course of action is to be fully transparent with customers going forward, and to purge data that was collected under suspect conditions in the past.”

    The full read at either PCMag or Motherboard is fascinating and is another good reminder that nothing in life is free. Companies that offer a ‘free service’ are making their money somewhere—often at the expense of the customer.

  • New Ransomware Attacks Critical Infrastructure

    Ars Technica is reporting on a new type of ransomware that tampers with and stops critical infrastructure software, such as that used by gas refineries, power grids and dams.

    Ransomware has become a multi-billion dollar plague, with some estimates placing the cost in 2019 at $7.5 billion. Hospitals, businesses, government agencies and universities have all been impacted. The usual M.O. for ransomware is to encrypt files on the target system and hold the files for ransom until the victim pays.

    One of the latest ransomware strains, dubbed Ekans, may have far more chilling implications. According to Ars Technica, in addition to the traditional methods Ekans employs “researchers at security firm Dragos found something else that has the potential to be more disruptive: code that actively seeks out and forcibly stops applications used in industrial control systems, which is usually abbreviated as ICS. Before starting file-encryption operations, the ransomware kills processes listed by process name in a hard-coded list within the encoded strings of the malware.”

    Fortunately, Ekans is relatively primitive and is likely to have minimal impact on ICS programs. As Ars Technica highlights, “Monday’s report described Ekans’ ICS targeting as minimal and crude because the malware simply kills various processes created by widely used ICS programs. That’s a key differentiator from ICS-targeting malware discovered over the past few years with the ability to do much more serious damage.”

    Even so, this is a disturbing escalation in the cybersecurity wars, one that is likely the beginning of a new breed of ransomware.

  • Google Accidentally Sent Video Backups to Strangers

    Google has admitted it accidentally sent videos from its Google Takeout archive service to the wrong users, according to 9to5Google.

    Google Takeout is a service that lets users download all their data to migrate to another service, or merely to use as a backup. According to the report, however, Google accidentally downloaded and saved some users’ videos to the wrong archives, essentially sending them to complete strangers.

    Google provided 9to5Google with the following statement:

    “We are notifying people about a bug that may have affected users who used Google Takeout to export their Google Photos content between November 21 and November 25. These users may have received either an incomplete archive, or videos—not photos—that were not theirs. We fixed the underlying issue and have conducted an in-depth analysis to help prevent this from ever happening again. We are very sorry this happened.”

    The company asks users who were impacted to delete their previous export and request another one.

  • Jigsaw Unveils Assembler Tool to Help Spot Deepfakes

    Alphabet-owned company Jigsaw has unveiled a new tool called Assembler to help journalists spot doctored images and deepfakes, according to a blog post by CEO Jared Cohen.

    Deepfake images and videos are created using artificial intelligence, transposing one person’s likeness onto another’s body and making it appear someone is doing something they aren’t. Although the technology is still in its early stages, experts fear that as it progresses it could have profound impacts on everything from personal scandals to the outcome of elections. For journalists, deepfakes and doctored images represent a threat to accuracy and journalistic integrity.

    As these kinds of threats continue to emerge, Jigsaw “forecasts and confronts emerging threats, creating future-defining research and technology to keep our world safer,” including combating doctored images.

    “Jigsaw’s work requires forecasting the most urgent threats facing the internet, and wherever we traveled these past years — from Macedonia to Eastern Ukraine to the Philippines to Kenya and the United States — we observed an evolution in how disinformation was being used to manipulate elections, wage war, and disrupt civil society,” writes Cohen.

    Jigsaw is working with a select group of journalists and fact-checkers to test and improve Assembler before making it widely available.

    “Assembler is an early stage experimental platform advancing new detection technology to help fact-checkers and journalists identify manipulated media,” adds Cohen. “In addition, the platform creates a space where we can collaborate with other researchers who are developing detection technology. We built it to help advance the field of science, and to help provide journalists and fact-checkers with strong signals that, combined with their expertise, can help them judge if and where an image has been manipulated. With the help of a small number of global news providers and fact checking organizations including Agence France-Presse, Animal Politico, Code for Africa, Les Décodeurs du Monde, and Rappler, we’re testing how Assembler performs in real newsrooms and updating it based on its utility and tester feedback.”

    Assembler’s release coincides with Alphabet CEO Sundar Pichai expressing his belief that tech companies must be responsible for the technology they create, rather than simply unleashing tech and leaving others to figure out the ethical dilemmas.

  • Twitter Suffers Serious Security Incident: Usernames Matched to Phone Numbers

    Twitter has disclosed a serious security incident that allowed bad actors to link usernames with phone numbers.

    According to a blog post on the company’s privacy site, on December 24, 2019, Twitter “became aware that someone was using a large network of fake accounts to exploit our API and match usernames to phone numbers.”

    The company took immediate action to suspend the fake accounts but, upon further investigation, Twitter discovered additional accounts that may have been exploiting the API. The API in question allows users to find other people they know by using their phone number, provided the other person has the “Let people who have your phone number find you on Twitter” option turned on and has a phone number linked to their account. The fake accounts, however, misused the API to link phone numbers and usernames of accounts they had no connection to.

    Although the fake accounts’ IP addresses traced back to locations all around the globe, Twitter says there was an unusually high number that traced back to Iran, Israel, and Malaysia. As a result, Twitter says it’s “possible that some of these IP addresses may have ties to state-sponsored actors.”

    The company has changed how the API works to make sure this can’t be exploited in the future and apologized to its users for the incident.

  • FCC Finds Carriers Broke the Law by Selling Location Data

    The Federal Communications Commission (FCC) has found that wireless carriers violated federal law in selling customer location data to third parties.

    FCC Chairman Ajit Pai has sent a letter to several lawmakers informing them of the results of the agency’s investigation. According to Engadget, in 2018 it first came to light that wireless carriers were selling “their customers’ real-time location data to aggregators, which then resold it to other companies or even gave it away.”

    Senator Ron Wyden brought to Chairman Pai’s attention the case of prison phone company Securus Technologies. Securus was buying wireless location data and providing “that information, via a self-service web portal, to the government for nothing more than the legal equivalent of a pinky promise. This practice skirts wireless carrier’s legal obligation to be the sole conduit by which the government conducts surveillance of Americans’ phone records, and needlessly exposes millions of Americans to potential abuse and surveillance by the government.”

    Once the information came to light, Verizon was the first to promise to stop the practice, with the other three carriers following suit. Even so, the FCC launched an investigation to determine if federal laws were broken, and it appears they were.

    In the letters, Chairman Pai said:

    “Fulfilling the commitment I made in that letter, I wish to inform you that the FCC’s Enforcement Bureau has completed its extensive investigation and that it has concluded that one or more wireless carriers apparently violated federal law.

    “I am committed to ensuring that all entities subject to our jurisdiction comply with the Communications Act and the FCC’s rules, including those that protect consumers’ sensitive information, such as real-time location data. Accordingly, in the coming days, I intend to circulate to my fellow Commissioners for their consideration one or more Notice(s) of Apparent Liability for Forfeiture in connection with the apparent violation(s).”

    That last part, in particular, is an indication the FCC will take some form of action against the offending parties.

    It’s one thing when companies offering a free service look for ways to profit off of their customers’ data—with the proper disclosures, of course. It’s quite another when companies that already charge for the service they offer then proceed to double-dip by selling their customers’ data, let alone doing it without properly disclosing it. It’s nice to see the FCC agrees such behavior is illegal, not to mention unethical.

  • Google Paid Record-Breaking $6.5 Million In Bug Bounties In 2019

    Google has announced it paid a record-breaking $6.5 million through its Vulnerability Reward Programs in 2019.

    Google’s VRPs reward security researchers who find and report bugs so the company can address them. According to the company, 2019’s payout doubled what had been paid in any previous single year.

    Programs such as this have become a critical tool for companies in the fight against hackers and cybercriminals. By relying on security researchers and “white hat” hackers, companies hope to find security vulnerabilities and bugs before cybercriminals, or “black hats,” do.

    According to Google, “since 2010, we have expanded our VRPs to cover additional Google product areas, including Chrome, Android, and most recently Abuse. We’ve also expanded to cover popular third party apps on Google Play, helping identify and disclose vulnerabilities to impacted app developers. Since then we have paid out more than $21 million in rewards.”

    Although $6.5 million is a sizable amount, it pales in comparison to the cost of an exploited security vulnerability or data breach. In fact, according to a study sponsored by IBM Security, the average cost of a single data breach is $3.92 million. In view of the number of bug fixes that $6.5 million facilitated, it seems like quite the bargain.

  • Ring Uses Android Doorbell App to Surveil Customers

    The Electronic Frontier Foundation (EFF) has discovered that Ring’s Android doorbell camera app is being used to surveil customers.

    According to the EFF, the Ring Android app is “packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII). Four main analytics and marketing companies were discovered to be receiving information such as the names, private IP addresses, mobile network carriers, persistent identifiers, and sensor data on the devices of paying customers.”

    Specifically, the data is shared with Branch, AppsFlyer, MixPanel and Google’s Crashalytics. EFF’s investigation was able to uncover what data was being sent to each entity.

    Branch is a “deep linking” platform that receives several unique identifiers, “as well as your device’s local IP address, model, screen resolution, and DPI.”

    AppsFlyer is “a big data company focused on the mobile platform,” and receives information that includes unique identifiers, when Ring was installed, interactions with the “Neighbors” section and more. Even worse, AppsFlyer “receives the sensors installed on your device (on our test device, this included the magnetometer, gyroscope, and accelerometer) and current calibration settings.”

    MixPanel receives the most information, including “users’ full names, email addresses, device information such as OS version and model, whether bluetooth is enabled, and app settings such as the number of locations a user has Ring devices installed in.”

    It’s unknown what data is sent to Crashalytics, although it’s likely that’s the most benign of the data-sharing partnerships.

    The worst part is that, while all of these companies are listed in Ring’s third-party services list, the amount of data collection is not. As a result, there is no way for a customer to know how much data is being collected or what is being done with it, let alone have the option to opt out of it.

    Ring has been in the news recently for several high-profile security issues, including its cameras being hacked and a VICE investigation revealing an abysmal lack of basic security features. While both of these can be chalked up to errors or incompetence, this latest discovery is deeply disturbing because it speaks to how Ring is designed to function—namely as a way for the company to profit off of surveilling its own customers.

  • Intel Dealing With ZombieLoad Flaw For Third Time

    For the third time in a year, Intel is preparing to release a patch to address two microarchitectural data sampling (MDS) flaws, also known as ZombieLoad flaws.

    According to the company’s blog post, of these two new issues, one is considered low risk and the other medium. Both of them require authenticated local access, meaning a hacker should not be able to remotely exploit these flaws. These new issues are closely related to issues that were addressed in May and November 2019, as Intel has worked to progressively reduce the MDS vulnerability.

    “These issues are closely related to INTEL-SA-00233, released in November 2019, which addressed an issue called Transactional Synchronization Extensions (TSX) Asynchronous Abort, or TAA,” writes Jerry Bryant, Director of security communication in the Intel Platform Assurance and Security group. “At the time, we confirmed the possibility that some amount of data could still potentially be inferred through a side-channel and would be addressed in future microcode updates.

    “Since May 2019, starting with Microarchitectural Data Sampling (MDS), and then in November with TAA, we and our system software partners have released mitigations that have cumulatively and substantially reduced the overall attack surface for these types of issues. We continue to conduct research in this area – internally, and in conjunction with the external research community.”

    Intel has faced intense criticism from security researchers for its decision to address these vulnerabilities in phases, rather than taking an immediate, comprehensive approach to fixing them.

    In the meantime, the latest patch should be available “in the near future.”

  • NJ Bans Clearview; Company Faces Potential Class-Action

    Facial recognition firm Clearview AI is facing a potential class-action lawsuit, even as New Jersey police have been banned from using its service, according to separate reports by the New York Times (NYT) and CNET.

    The NYT is reporting that Clearview has found itself in hot water with the New Jersey attorney general over the main promotional video it was running on its website. The video showed Attorney General Gurbir Grewal and two state troopers at a press conference detailing an operation to apprehend 19 men accused of trying to lure children for sex, an operation for which Clearview took at least partial credit.

    Mr. Grewal was not impressed with Clearview using his likeness in its promotional material, or with the potential legal and ethical issues the service poses.

    “Until this week, I had not heard of Clearview AI,” Mr. Grewal said in an interview. “I was troubled. The reporting raised questions about data privacy, about cybersecurity, about law enforcement security, about the integrity of our investigations.”

    Mr. Grewal was also concerned about the company sharing details of ongoing investigations.

    “I was surprised they used my image and the office to promote the product online,” Mr. Grewal continued, while also acknowledging that Clearview had been used to identify one of the suspects. “I was troubled they were sharing information about ongoing criminal prosecutions.”

    As a result of his concerns, Mr. Grewal has told state prosecutors in NJ’s 21 counties that police should not use the app.

    At the same time, CNET is reporting that an individual has filed a lawsuit in the US District Court for the Northern District of Illinois, Eastern Division, and is seeking class-action status.

    “Without obtaining any consent and without notice, Defendant Clearview used the internet to covertly gather information on millions of American citizens, collecting approximately three billion pictures of them, without any reason to suspect any of them of having done anything wrong, ever,” alleges the complaint. “Clearview used artificial intelligence algorithms to scan the facial geometry of each individual depicted in the images, a technique that violates multiple privacy laws.”

    It was only a matter of time before Clearview faced the fallout from its actions. It appears that fallout is happening sooner rather than later.

  • Google Releases Cloud-Native Security Whitepaper

    In light of the ongoing ascendancy of cloud computing, Google has released a new whitepaper addressing cloud-native security.

    The whitepaper highlights a new approach to cloud security, emphasizing the unique needs of cloud-based systems. For example, in traditional computer security, tremendous emphasis is placed on perimeter security—keeping people out. As Google points out, however, that approach doesn’t work well with cloud-based systems.

    “It had become clear to us that a perimeter-based security model wasn’t secure enough,” the whitepaper reads. “If an attacker were to breach the perimeter, they would have free movement within the network. While we realized we needed stronger security controls throughout our infrastructure, we also wanted to make it easy for Google developers to write and deploy secure applications without having to implement security features themselves.

    “Moving from monolithic applications to distributed microservices deployed from containers using an orchestration system had tangible operational benefits: simpler management and scalability. This cloud-native architecture required a different security model with different tools to protect deployments aligned with the management and scalability benefits of microservices.”

    This new approach is called BeyondProd. BeyondProd builds on the principles outlined in a previous approach called BeyondCorp, and emphasizes zero trust between services.

    “In the same way that BeyondCorp helped us to evolve beyond a perimeter based security model, BeyondProd represents a similar leap forward in our approach to production security. The BeyondProd approach describes a cloud-native security architecture that assumes no trust between services, provides isolation between workloads, verifies that only centrally built applications are deployed, automates vulnerability management, and enforces strong access controls to critical data. The BeyondProd architecture led Google to innovate several new systems in order to meet these requirements.

    “All too often, security is ‘called in’ last—when the decision to migrate to a new architecture has already been made. By involving your security team early and focusing on the benefits of the new security model like simpler patch management and tighter access controls, a cloud-native architecture can provide significant benefits to both application development and security teams. When applying the security principles outlined in this paper to your cloud-native infrastructure, you can strengthen the deployment of your workloads, how your workloads’ communications are secured, and how they affect other workloads.”

    The full whitepaper is a must read for companies designing and deploying cloud-based systems and illustrates the unique approach cloud security demands.

  • Troubles Mount For Clearview AI, Facial Recognition Firm

    According to a report by The Verge, Clearview AI is facing challenges to both its credibility and the legality of the service it provides.

    On the heels of reports, originally covered by the New York Times, that Clearview AI has amassed more than three billion photos, scraped from social media platforms and millions of websites—and has incurred Twitter’s ire in the process—it appears the company has not been honest about its background, capabilities or the extent of its successes.

    A BuzzFeed report points out that Clearview AI’s predecessor program, Smartcheckr, was specifically marketed as being able to “provide voter ad microtargeting and ‘extreme opposition research’ to Paul Nehlen, a white nationalist who was running on an extremist platform to fill the Wisconsin congressional seat of the departing speaker of the House, Paul Ryan.”

    Further hurting the company’s credibility is an example it uses in its marketing, about an alleged terrorist who was apprehended in New York City after causing panic by disguising rice cookers as bombs. The company cites the case as one of thousands of instances in which it has aided law enforcement. The only problem is that the NYPD said it did not use Clearview in that case.

    “The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a spokesperson for the NYPD told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.”

    That last statement, regarding “lawfully possessed arrest photos,” is particularly stinging as the company is beginning to face legal pushback over its activities.

    New York Times journalist Kashmir Hill, who originally broke the story, cited the example of asking police officers she was interviewing to run her face through Clearview’s database. “And that’s when things got kooky,” Hill writes. “The officers said there were no results — which seemed strange because I have a lot of photos online — and later told me that the company called them after they ran my photo to tell them they shouldn’t speak to the media. The company wasn’t talking to me, but it was tracking who I was talking to.”

    Needless to say, such an Orwellian use of the technology is not sitting well with some lawmakers. According to The Verge, members of Congress are beginning to voice concerns, with Senator Ed Markey sending a letter to Clearview founder Ton-That demanding answers.

    “The ways in which this technology could be weaponized are vast and disturbing. Using Clearview’s technology, a criminal could easily find out where someone walking down the street lives or works. A foreign adversary could quickly gather information about targeted individuals for blackmail purposes,” writes Markey. “Clearview’s product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified.”

    The Verge also cites a recent Twitter post by Senator Ron Wyden, one of the staunchest supporters of individual privacy, in which he comments on the above disturbing instance of Clearview monitoring Ms. Hill’s interactions with police officers.

    “It’s extremely troubling that this company may have monitored usage specifically to tamp down questions from journalists about the legality of their app. Everyday we witness a growing need for strong federal laws to protect Americans’ privacy.”

    —Ron Wyden (@RonWyden) January 19, 2020

    Ultimately, Clearview may well provide the impetus for lawmakers to craft a comprehensive, national-level privacy law, something even tech CEOs are calling for.

  • Britain On the Verge Of Including Huawei

    Britain On the Verge Of Including Huawei

    Despite U.S. pressure to ban Huawei, the British government is preparing to include the company in its 5G plans, according to Reuters.

    The U.S. has already banned Huawei and has engaged in a campaign for its allies to do the same, citing allegations the telecommunications giant serves as a way for Beijing to spy on governments and companies around the world. There have even been threats of limiting intel sharing with countries that use Huawei, something that would have profound impacts on the U.S. and UK’s “special relationship.”

    According to Reuters, Britain is trying to thread the needle by considering an option that would include Huawei, but limit it “from the sensitive, data-heavy ‘core’ part of the network and restricted government systems, closely mirroring a provisional decision made last year under former Prime Minister Theresa May.”

    Any concession toward Huawei is likely to strain relations with the U.S. but, as Reuters points out, Britain is also trying to balance its trade with China and the warnings of telecom operators that banning Huawei would significantly raise the cost of 5G deployment.

    A final decision is expected next week.

  • PSA: Beware of FedEx Tracking Texting Scam

    PSA: Beware of FedEx Tracking Texting Scam

    Gizmodo is warning of a new scam involving text messages posing as FedEx tracking notifications.

    Android and iOS users (including this writer) have received text messages including what purports to be a FedEx tracking number and a link to set delivery preferences. Clicking on the link, however, goes to a fake Amazon listing and survey.

    As Gizmodo highlights, this is where the scam takes a turn. “If you proceed any further, the survey will then ask users for a range of personal information including their credit card information, which for anyone who hadn’t already started feeling suspicious, should set off serious alarms.

    “Apparently, by entering in your address and credit card number and agreeing to pay a shipping fee for your “prize,” you are also signing up for 14-day trial that turns into a $100 recurring subscription for a range of products, which you will continue to get billed for every month until you figure out how to cancel the payment.”

    One way to spot the scam is the alphanumeric nature of the supposed tracking numbers. FedEx tracking numbers are almost always exclusively numbers, whereas the fake ones include letters as well. Similarly, FedEx tracking numbers are 12 or 15 digits long, as opposed to the 10-digit fake ones.
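    The heuristic above can be sketched as a simple check. This is a hypothetical helper based only on the rule of thumb described here (all-numeric, 12 or 15 digits), not FedEx’s actual validation logic:

    ```python
    import re

    def looks_like_real_fedex_number(tracking: str) -> bool:
        """Rough heuristic from the article: genuine FedEx tracking
        numbers are all digits and 12 or 15 characters long, while
        the scam texts use 10-character alphanumeric strings."""
        return bool(re.fullmatch(r"\d{12}|\d{15}", tracking))

    # A 12-digit all-numeric string passes the heuristic...
    print(looks_like_real_fedex_number("123456789012"))   # True
    # ...while a 10-character alphanumeric one, like the scam texts use, fails.
    print(looks_like_real_fedex_number("ab12cd34ef"))     # False
    ```

    A check like this can only flag obvious fakes; the safest habit remains entering the number directly on FedEx’s site.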

    Police departments are warning citizens of the scam and encouraging individuals to check any tracking numbers they receive directly on FedEx’s website, rather than following a link in a text message.

  • The Company That Can End Privacy Just Ran Afoul of Twitter

    The Company That Can End Privacy Just Ran Afoul of Twitter

    Clearview AI, the company that made headlines last week for potentially ending privacy as we know it, has incurred the wrath of Twitter, according to The Seattle Times.

    New York Times journalist Kashmir Hill first reported on Clearview AI, a small, little-known startup that allows you to upload a photo and then compare it against a database of more than three billion photos the company has amassed. Clearview’s system will then show you “public photos of that person, along with links to where those photos appeared.”

    Clearview has built its database by scraping Twitter, Facebook, YouTube, Venmo and millions of other websites for photos of people, something that is blatantly against most companies’ terms of service. The database is so far beyond anything the government has that some 600 law enforcement agencies have begun using Clearview—without any public scrutiny or a legislative stance on the legality of what Clearview does.

    To make matters even worse, once a person’s photos or social media profile has been scraped and added to the database, there is currently no way to have the company remove them. The only recourse available to individuals is to change the privacy settings of their social media profiles to prevent search engines from accessing them. This will stop Clearview from scraping any additional photos from a profile but, again, it does nothing to address the photos the company already has.

    At least one company is taking a strong stand against Clearview, namely Twitter. The Seattle Times is reporting that Twitter has sent Clearview a cease-and-desist letter demanding it stop scraping the site and user profiles for “any reason.” The letter further demands that Clearview delete any and all data it has already collected from Twitter.

    Clearview is a prime example of what Alphabet CEO Sundar Pichai was talking about, in an op-ed he published in the Financial Times, when he said tech companies needed to take responsibility for the technology they create, not just charge ahead because they can. Similarly, Salesforce co-CEO Keith Block recently said the U.S. needed a national privacy law similar to the EU’s GDPR. If Clearview doesn’t make a case for such regulation…nothing will.

    In the meantime, here’s to hoping every other company and website Clearview has scraped for photos takes as strong a stance as Twitter.

  • Microsoft Error Exposes Database Containing 250 Million Service Records

    Microsoft Error Exposes Database Containing 250 Million Service Records

    Microsoft has announced in a blog post that a database containing 250 million service records was left exposed due to a configuration error.

    According to security firm Comparitech, a “security research team led by Bob Diachenko uncovered five Elasticsearch servers, each of which contained an apparently identical set of the 250 million records. Diachenko immediately notified Microsoft upon discovering the exposed data, and Microsoft took swift action to secure it.”

    Diachenko, a well-known cybersecurity professional who collaborates with Comparitech, praised Microsoft’s quick response to his findings.

    “I immediately reported this to Microsoft and within 24 hours all servers were secured. I applaud the MS support team for responsiveness and quick turnaround on this despite New Year’s Eve.”

    Microsoft’s own investigation continued, leading to the blog post today detailing what went wrong.

    “Our investigation has determined that a change made to the database’s network security group on December 5, 2019 contained misconfigured security rules that enabled exposure of the data.”

    The company said that the vast majority of data had already been cleared of any identifiable personal information, although there was some data meeting specific criteria that may not have been redacted.

    “As part of Microsoft’s standard operating procedures, data stored in the support case analytics database is redacted using automated tools to remove personal information. Our investigation confirmed that the vast majority of records were cleared of personal information in accordance with our standard practices. In some scenarios, the data may have remained unredacted if it met specific conditions.”

    Most importantly, the company says it has found no evidence of any malicious use of the exposed database.

  • Salesforce Co-CEO Says U.S. Needs National Privacy Law

    Salesforce Co-CEO Says U.S. Needs National Privacy Law

    Salesforce co-CEO Keith Block has come out in favor of a national privacy law, according to CNBC.

    Privacy is becoming one of the biggest battlegrounds for companies, governments and individuals alike. The U.S., however, does not have a comprehensive privacy law to outline what companies can and cannot do with individual data, or what rights individuals have to protect their privacy.

    In contrast, the European Union’s (EU) General Data Protection Regulation (GDPR) went into effect in 2018 and provides comprehensive privacy protections and gives consumers rights over their own data. Similarly, the California Consumer Privacy Act (CCPA) went into effect January 1, and provides similar protections. Although companies, such as Microsoft and Mozilla, have expanded GDPR and CCPA protections to all of their customers, there are far more companies that have not, and have no intention of doing so.

    At a panel discussion at the World Economic Forum (WEF), Keith Block said the U.S. needs its own version of the GDPR.

    “You have to applaud, for example, the European Union for coming up with GDPR and hopefully there will be a GDPR 2.0,” said Block.

    “There is no question there needs to be some sort of regulation in the United States. It would be terrific if we had a national data privacy law; instead we have privacy by zipcode, which is not a good outcome,” he said.

    As the issue continues to impact individuals and organizations, it will be interesting to see if the U.S. follows the EU’s lead.

  • FBI Seizes Site With 12 Billion Stolen User Names & Passwords

    FBI Seizes Site With 12 Billion Stolen User Names & Passwords

    In an international operation, the FBI has seized a website containing user data from over 10,000 data breaches, according to Engadget.

    According to the report, the FBI seized WeLeakInfo, a website that contained personal data taken from 10,300 data breaches. Engadget says the “site promoted itself as a legitimate way to perform security research, even though it offered phone numbers, IP addresses and other personal info that’s protected by law.”

    Even worse, the information was organized in a searchable database that could be accessed through subscriptions starting at as little as $2. With just an email address, someone could find any associated names, passwords, phone numbers and IP addresses. Engadget recommends individuals check “security expert Troy Hunt’s excellent haveibeenpwned.com site” to see if their information has been stolen.
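    The haveibeenpwned service also offers a Pwned Passwords API built around k-anonymity: only the first five characters of your password’s SHA-1 hash are sent to the server, and the returned hash suffixes are compared locally, so the full password never leaves your machine. A minimal sketch of the client-side hashing step (the network call is omitted, and `hibp_prefix_suffix` is a hypothetical helper, not part of any library):

    ```python
    import hashlib

    def hibp_prefix_suffix(password):
        """Split a password's uppercase SHA-1 hex digest into the
        5-character prefix sent to the Pwned Passwords range API
        and the 35-character suffix compared locally against the
        API's response."""
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        return digest[:5], digest[5:]

    prefix, suffix = hibp_prefix_suffix("password")
    # Only this 5-character prefix would ever be sent over the network.
    print(prefix)   # "5BAA6"
    ```

    If any suffix in the API’s response for that prefix matches your local suffix, the password has appeared in a known breach and should be changed everywhere it was used.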

    As more and more services, platforms and devices become interconnected, it’s important for users to periodically change their passwords, and to use unique passwords for different services. If a person uses the same password across multiple services, it only takes a single breach to expose their data in multiple locations.

  • PSA: Cybercriminals Preying On Nest Users With ‘Sextortion’ Scheme

    PSA: Cybercriminals Preying On Nest Users With ‘Sextortion’ Scheme

    Following reports of connected security cameras, such as Ring and Nest, being targeted by hackers, scammers are preying on people’s fears with a “sextortion” scheme, according to CNBC.

    The scam relies on “social engineering,” or the ability to convince an unsuspecting victim to do something they wouldn’t normally do, through the use of charm, guilt, shame or authority. The scammer has usually done enough research, and has enough information and half-truths, to make the scam seem credible.

    According to CNBC, IT security firm Mimecast saw “a huge spike in the new tactic, with more than 1,600 scam emails intercepted in just a two-day period from Jan. 2 to Jan. 3.”

    When describing this particular scam, Mimecast’s Kiri Addison, head of data science, said “this one is a bit different. It stood out, because it’s really convoluted in a way. It starts out with a single email saying ‘we’ve got some nude photos of you.’”

    The email will include a link to a website showing Nest footage from an innocent area the person could have visited, such as a bar or restaurant. The idea is to make the person believe they’ve been monitored and recorded over a long period of time, in any number of situations, making it more believable they may have been recorded in a compromising position.

    Ultimately, the victim is walked through the process of establishing a bitcoin wallet and paying the scammers $500 to keep their photos and videos from being released on porn sites. It’s important to understand there aren’t actually any photos or videos.

    As CNBC points out, “if you receive a sextortion email, the best thing you can do is ignore it.

    “Although internet-connected cameras and smartphones can be hacked, this is a very rare event. It’s practically non-existent for such a hack to be combined with an extortion demand.”