WebProNews

Tag: Privacy

  • Facebook Is Encrypting Links to Bypass Browser Privacy Settings

    Facebook is at it again, encrypting its URLs in an effort to bypass the privacy protections afforded by Brave and Firefox.

    According to Ghacks, the issue stems from changes Firefox and Brave made to strip out tracking parameters from URLs. Tracking parameters are trailing characters in a URL that provide no benefit to the user, designed to help the website track them. To get around Brave and Firefox stripping out the tracking parameters, Facebook is working to encrypt its URLs.

    Facebook is specifically encrypting the URLs, rather than simply changing their parameters, in an effort to prevent the browser makers from adapting. URL stripping is based on known tracking parameters; once Facebook changed the parameters, the browser makers would simply adapt and filter out the new ones. Encrypting the URLs makes it far more difficult, if not impossible, for the browser makers to adapt.
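    To see why parameter stripping depends on recognizable parameter names, here is a minimal Python sketch of the technique. The parameter list is a small illustrative sample, not an actual browser filter list; Brave and Firefox maintain much longer ones.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative sample of tracking parameters commonly stripped by
# privacy-focused browsers. Real filter lists are far longer.
TRACKING_PARAMS = {"fbclid", "gclid", "utm_source", "utm_medium", "utm_campaign"}

def strip_tracking_params(url: str) -> str:
    """Remove known tracking parameters from a URL, keeping the rest."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking_params(
    "https://example.com/article?id=42&fbclid=AbC123&utm_source=facebook"))
# -> https://example.com/article?id=42
```

    An encrypted URL defeats this approach entirely: with no recognizable parameter names, there is nothing for the filter list to match.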

    Facebook has a long and well-established reputation for ignoring privacy and going to great lengths to collect any and all information it can on users. Its latest effort completely ignores users’ preferences by bypassing protections they have opted to use. What’s worse, the company has shown the rest of the industry how to bypass this protection.

  • Facebook Is Testing Multiple Profiles per Account

    In its quest to remain relevant in the face of newer, hipper rivals, Facebook is testing the ability to have multiple profiles per account.

    Facebook may be the 800-pound gorilla in the social media space, but it is increasingly pressured by newer rivals that are capturing the younger market. According to BGR, Facebook is testing the ability to have up to five separate profiles per account.

    The idea behind the feature is to give users an easy way to fine-tune who they share information with. For example, a user could have a work profile for colleagues but share more intimate information with their friends or family profile.

    The new feature will not impact Facebook’s user statistics, as the company will only count accounts, not profiles. Nonetheless, fine-tuning the data people share on Facebook and with whom they share it seems like a wonderful way for Facebook to learn more about its users and a terrible idea for end-user privacy.

  • Leaked Meetings Show TikTok Shares US User Data With China

    TikTok is once again under fire for its privacy policies, with leaked meeting recordings showing the company is reneging on a major promise.

    Leaked recordings of some 80 internal TikTok meetings have once again blown the lid off TikTok’s privacy claims, showing the company’s engineers in China had access to US user data at least as recently as January 2022.

    “Everything is seen in China,” said a member of TikTok’s Trust and Safety department in a September 2021 meeting, according to BuzzFeed News, the outlet that broke the story.

    TikTok has been the social media star of the last couple of years, with its explosive growth making it one of China’s biggest tech hits. Despite that growth, the platform has consistently come under scrutiny for its privacy practices. The company has run afoul of EU privacy laws, been accused of violating child privacy on multiple occasions, been found sending job applicant data to China, and encouraged its moderators to censor content from “users deemed too ugly, poor, or disabled for the platform.”

    Read more: Multiple States Investigate TikTok’s Impact on Children

    Amazingly, through all of this, the company had maintained that it does not share US user data with China, even swearing in testimony before a Senate hearing that it was only a US team that decided where US user data was handled. According to BuzzFeed News, nothing could be further from the truth.

    After reviewing the meeting records, BuzzFeed News found “14 statements from nine different TikTok employees indicating that engineers in China had access to US data between September 2021 and January 2022, at the very least.”

    Despite the TikTok executive’s Senate testimony about the “world-renowned, US-based security team” that decided how data was handled, the meeting recordings show that US staff had neither the know-how nor the permission to handle the data on their own, forcing them to turn to their counterparts in China.

    This latest revelation will likely lead to further investigations and possible sanctions against the company, especially since the evidence suggests the company’s executive lied to the Senate.

    While TikTok narrowly managed to avoid being banned from the US, or being forced to sell its US assets, under the Trump administration, its luck may be on the verge of running out.

  • $5.5 Million — That’s the Price Americans Want for Their Search History

    A new report demonstrates Americans may value their search history a little more than some companies may have expected, putting a $5.5 million price tag on it.

    Google and Facebook have led to a massive erosion of privacy, as users have been willing to trade away their personal information for the free services those companies, and many others, provide. In recent years, however, users have begun pushing back, valuing their privacy more and more. According to a survey by SimpleTexting, Americans are finally putting a price on their search history: $5.5 million.

    SimpleTexting surveyed 3,000 US participants to see just how much they valued their search history and what they would give up in exchange for making it public. The results were surprising.

    • Users would require $5.5 million in exchange for making their search history public.
    • 1 in 4 users would give up their car for a year instead of making their search history public.
    • Nearly 3 in 10 would rather give up their smartphone for a year.
    • Nearly 7 in 10 would rather give up alcohol for a year.
    • 2 in 5 would rather give up all streaming services for a year.
    • More than 1 in 3 would rather give up sex for a year.

    The survey is good news for privacy advocates and hopefully indicates increased awareness of the importance of privacy.

  • ExpressVPN Removes Its Servers From India

    ExpressVPN has removed its servers from India in response to legislation that requires VPNs to track and log a significant amount of user data.

    India recently passed the Cyber Security Directions legislation, requiring VPN providers to log customer names, IP addresses, email addresses, financial transactions, and more. The government has taken a hard line, insisting VPN providers must play ball or leave the country. ExpressVPN is opting for the latter, announcing it is shutting down its servers in the country.

    ExpressVPN announced its course of action in a blog post, saying it was a “very straightforward decision to remove” its servers from India. At the same time, the company plans to continue supporting its Indian customers.

    “Rest assured, our users will still be able to connect to VPN servers that will give them Indian IP addresses and allow them to access the internet as if they were located in India,” the company writes. “These ‘virtual’ India servers will instead be physically located in Singapore and the UK.

    “In terms of the user experience, there is minimal difference. For anyone wanting to connect to an Indian server, simply select the VPN server location ‘India (via Singapore)’ or ‘India (via UK).’”

    By giving Indian users the ability to use servers outside the country, ExpressVPN can provide the privacy and security its users expect while remaining outside the reach of India’s new law. The company makes it clear it has no intention of ever complying with Cyber Security Directions.

    “ExpressVPN refuses to participate in the Indian government’s attempts to limit internet freedom,” the company adds. “As a company focused on protecting privacy and freedom of expression online, we will continue to fight to keep users connected to the open and free internet with privacy and security, no matter where they are located.”

  • Researcher Discovers DuckDuckGo Allows Some Microsoft Trackers

    DuckDuckGo is receiving criticism for the terms of a deal with Microsoft that has resulted in some Microsoft trackers being whitelisted.

    DDG has made a name for itself as a privacy-first company, building a search engine, browser extensions, and web browsers around the premise of protecting user privacy. The company is one of the few that truly makes an effort to protect user privacy and data. Unfortunately, its terms with Microsoft have caused some concern.

    Unlike Google, Bing, or Brave, DDG gets its search results from other engines, with the bulk of them coming from Bing. The company has long claimed to strip out trackers from the search results it provides, although clicking an ad from Microsoft in the search results is handled differently. DDG has never made a secret of the fact that clicking on those ads sends a user’s IP address to Microsoft, since the user is leaving DDG and entering Microsoft’s space.

    Unfortunately, DDG had not been able to disclose the terms of the deal that whitelisted some Microsoft trackers, due to a confidentiality clause in the agreement between the two companies. Security researcher Zach Edwards first made the discovery and tweeted about it:

    Sometimes you find something so disturbing during an audit, you’ve gotta check/recheck because you assume that *something* must be broken in the test. But I’m confident now. The new @DuckDuckGo browsers for iOS/Android don’t block Microsoft data flows, for LinkedIn or Bing.

    — Zach Edwards (@thezedwards), May 23, 2022

    Ironically, DDG doesn’t even block Microsoft’s data trackers on Workplace.com, a Facebook-owned domain that it brags about blocking Facebook’s trackers on.

    Needless to say, DDG CEO Gabriel Weinberg is doing his best to put out the fire:

    We’ve been working tirelessly behind the scenes to change these requirements, though our syndication agreement also has a confidentially provision that prevents disclosing details. Again, we expect to have an update soon that will include more third-party Microsoft protection.

    — Gabriel Weinberg (@yegg), May 23, 2022

    Of course, Weinberg might not have to put out so big a fire if his company had disclosed this issue first, rather than waiting until it was uncovered by a security researcher.

    In the meantime, Shivan Kaul Sahib, Privacy Engineer for Brave, highlighted the inherent conflict of interest for a company that relies on the good graces of another company making money off of ad trackers.

    This is shocking. DuckDuckGo has a search deal with Microsoft which prevents them from blocking MS trackers. And they can’t talk about it! This is why privacy products that are beholden to giant corporations can never deliver true privacy; the business model just doesn’t work.

    — Shivan Kaul Sahib (@shivan_kaul), May 23, 2022

    Speaking of Brave, the company is one of the few on the market providing a truly independent alternative to Google and Bing. The company bought Tailcat, allowing it to build its own search engine that relies on a completely independent web index. This keeps Brave from being beholden to Microsoft, Google, or any other company.

    With a privacy-focused browser and a truly independent search engine, Brave is quickly establishing itself as a much better privacy solution than DDG.

    In the meantime, here is a statement from Weinberg that was provided to WPN:

    “We have always been extremely careful to never promise anonymity when browsing, because that frankly isn’t possible given how quickly trackers change how they work to evade protections and the tools we currently offer. When most other browsers on the market talk about tracking protection they are usually referring to 3rd-party cookie protection and fingerprinting protection, and our browsers for iOS, Android, and our new Mac beta, impose these restrictions on third-party tracking scripts, including those from Microsoft. 

    What we’re talking about here is an above-and-beyond protection that most browsers don’t even attempt to do — that is, blocking third-party tracking scripts before they load on 3rd party websites. Because we’re doing this where we can, users are still getting significantly more privacy protection with DuckDuckGo than they would using Safari, Firefox and other browsers. This blog post we published gets into the real benefits users enjoy from this approach, like faster load times (46% average decrease) and less data transferred (34% average decrease). Our goal has always been to provide the most privacy we can in one download, by default without any complicated settings.” 

    “I understand this is all rather confusing because it is a search syndication contract that is preventing us from doing a non-search thing. That’s because our product is a bundle of multiple privacy protections, and this is a distribution requirement imposed on us as part of the search syndication agreement that helps us privately use some Bing results to provide you with better private search results overall. While a lot of what you see on our results page privately incorporates content from other sources, including our own indexes (e.g., Wikipedia, Local listings, Sports, etc.), we source most of our traditional links and images privately from Bing (though because of other search technology our link and image results still may look different). Really only two companies (Google and Microsoft) have a high-quality global web link index (because I believe it costs upwards of a billion dollars a year to do), and so literally every other global search engine needs to bootstrap with one or both of them to provide a mainstream search product. The same is true for maps btw — only the biggest companies can similarly afford to put satellites up and send ground cars to take streetview pictures of every neighborhood.

    Anyway, I hope this provides some helpful context. Taking a step back, I know our product is not perfect and will never be. Nothing can provide 100% protection. And we face many constraints: platform constraints (we can’t offer all protections on every platform do to limited APIs or other restrictions), limited contractual constraints (like in this case), breakage constraints (blocking some things totally breaks web experiences), and of course the evolving tracking arms race that we constantly work to keep ahead of. That’s why we have always been extremely careful to never promise anonymity when browsing outside our search engine, because that frankly isn’t possible. We’re also working on updates to our app store descriptions to make this more clear. Holistically though I believe what we offer is the best thing out there for mainstream users who want simple privacy protection without breaking things, and that is our product vision.”

    Updated 5/25/22: Edited for clarity and to add Gabriel Weinberg’s statement.

  • The UK Has Fined Clearview AI $9.4 Million

    The hits keep on coming for Clearview AI, with the UK’s privacy watchdog fining the company $9.4 million and demanding it delete its data on UK residents.

    Clearview AI is the company that took privacy-invading facial recognition to depths previously unheard of, proudly promising to deliver a more comprehensive surveillance system than China. The company scraped images from social media and countless other sites, building a massive database it claimed was only for government and law enforcement use. Those claims proved untrue, with the company being about as irresponsible with its product as one would expect, based on its shady practices.

    After a string of legal setbacks, the UK has dealt the company another one, fining it millions and ordering it to stop collecting and using the images and data of UK residents, according to ZDNet. The Information Commissioner’s Office (ICO) conducted a two-year investigation of Clearview, in cooperation with the Office of the Australian Information Commissioner.

    The investigation concluded that the company illegally obtained residents’ photos without proper disclosure, had no legal basis for collecting the photos, didn’t take the proper precautions with the data it collected, and was ultimately in violation of the GDPR.

    “Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images,” said John Edwards, UK Information Commissioner.

    “The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.

    “People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement.”

    Hopefully the company continues to face these kinds of legal setbacks.

  • VPN Providers May Be Forced to Pull Out of India

    VPN providers may be forced to pull out of the Indian market over a new law that undermines the privacy VPNs offer.

    India passed the Cyber Security Directions, a directive that requires VPN providers to keep records of customer names, IP addresses, email addresses, financial transactions, and more for a period of five years. India has now signaled there will be no tolerance for companies that refuse to comply, according to TechCrunch.

    Numerous companies have expressed concern over the directive, especially VPN providers that specifically guarantee anonymity. Many, such as Mullvad, NordVPN, ExpressVPN, and ProtonVPN, guarantee their customers a service that doesn’t track them or keep the kind of logs the Indian government wants.

    “The new Indian VPN regulations are an assault on privacy and threaten to put citizens under a microscope of surveillance. We remain committed to our no-logs policy,” said ProtonVPN.

    Rajeev Chandrasekhar, the junior IT minister of India, told TechCrunch that VPN providers who conceal who uses their services “will have to pull out.”

    The only services exempted are corporate and enterprise VPNs. The new directive goes into effect for everyone else in June.

  • Top Websites Capture Email and Passwords — Without You Clicking ‘Submit’

    New research shows that some of the world’s top websites collect data — including emails and passwords — from forms even if the user does not click the ‘Submit’ button.

    Submission forms are nearly as old as the internet itself, providing a way for individuals to create accounts, sign in to those accounts, join mailing lists and more. The Submit button is a critical part of those forms, with an implied agreement that data will not be captured until it is clicked. Unfortunately, some of the top websites are collecting users’ data anyway, without the proper consent.

    According to researchers from KU Leuven (Leuven, Belgium), Radboud University, and University of Lausanne, “users’ email addresses are exfiltrated to tracking, marketing and analytics domains before form submission and before giving consent on 1,844 websites when visited from the EU and 2,950 when visited from the US.”

    Interestingly, some 52 websites used third-party session replay scripts to capture passwords as well. Fortunately, all 52 rectified that specific problem when notified.

    Not surprisingly, social media sites were some of the worst offenders, with both Meta and TikTok capturing hashed personal information from forms regardless of whether the user clicked Submit. Obviously the data collection occurred without the user’s consent.

    Below is a list of some of the top sites that leaked email addresses to tracker domains (although some of these have since corrected the issue):

    • businessinsider.com
    • usatoday.com
    • foxnews.com
    • trello.com
    • independent.co.uk
    • theverge.com
    • shopify.com
    • marriott.com
    • newsweek.com
    • codecademy.com
    • azcentral.com

    “If there’s a Submit button on a form, the reasonable expectation is that it does something—that it will submit your data when you click it,” Güneş Acar, a professor and researcher at Radboud University who led the study, told Ars Technica. “We were super surprised by these results. We thought maybe we were going to find a few hundred websites where your email is collected before you submit, but this exceeded our expectations by far.”

    “The privacy risks for users are that they will be tracked even more efficiently; they can be tracked across different websites, across different sessions, across mobile and desktop,” Acar added. “An email address is such a useful identifier for tracking, because it’s global, it’s unique, it’s constant. You can’t clear it like you clear your cookies. It’s a very powerful identifier.”
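    Acar’s point about email as an identifier can be illustrated with a short Python sketch (hypothetical code, not from the study). Hashing a normalized address, as trackers commonly do, hides the raw email but yields the same stable identifier everywhere that address appears, which is exactly what makes it useful for cross-site tracking.

```python
import hashlib

def email_to_tracking_id(email: str) -> str:
    """Illustrative only: normalize an email address and hash it.
    The digest hides the raw address, but it is identical everywhere
    the same address is entered, so it still links activity across
    sites, sessions, and devices."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same address always yields the same identifier,
# regardless of capitalization or stray whitespace:
print(email_to_tracking_id("Alice@Example.com") ==
      email_to_tracking_id(" alice@example.com "))  # True
```

    Unlike a cookie, this identifier cannot be cleared; the only way to change it is to change email addresses.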

    The researchers have created LeakInspector, a Firefox extension that will help detect when a form is collecting data without consent. Users concerned with their privacy should download the extension immediately.

  • EU Proposes Most Privacy-Invasive Measure Yet to Tackle Child Abuse

    The European Union (EU) has proposed a new set of rules to tackle child abuse, rules that are being criticized as the most invasive “ever deployed outside of China and the USSR.”

    Governments and companies worldwide are grappling with how to protect children online. The EU has unveiled a new proposal that critics are almost universally panning, one that even the EU acknowledges is “most intrusive.”

    The EU’s proposal involves forcing companies to search all text messages and communications, including private, encrypted ones, in an effort to find and flag potential “grooming” on the part of child predators, as well as other CSAM (child sexual abuse material). Below is the EU’s description of the requirement (bold theirs):

    Detecting grooming would have a positive impact on the fundamental rights of potential victims by contributing to the prevention of abuse. At the same time, the detection process would be the most intrusive one for users (compared to the detection of known and new CSAM) since it would involve searching text, including in interpersonal communications, as the most important vector for grooming. On the one hand, such searches have to be considered as necessary to combat grooming since the service provider is the only entity able to detect it. Automatic detection tools have acquired a high degree of accuracy, and indicators are becoming more reliable with time as the algorithms learn, following human review. On the other hand, the detection of patterns in text-based communications may be more invasive into users’ rights than the analysis of an image or a video to detect CSAM, given the difference in the types of communications at issue and the mandatory human review of the online exchanges flagged as possible grooming by the tool.

    Matthew Green, a cryptography professor at Johns Hopkins University, highlighted exactly why this proposal is so intrusive in a series of tweets:

    “Let me be clear what that means: ‘to detect grooming’ is not simply searching for known CSAM. It isn’t using AI to detect new CSAM, which is also on the table. It’s running algorithms reading your actual text messages to figure out what you’re saying, at scale.” —Matthew Green (@matthew_d_green), May 10, 2022

    “It is potentially going to do this on encrypted messages that should be private. It won’t be good, and it won’t be smart, and it will make mistakes. But what’s terrifying is that once you open up ‘machines reading your text messages’ for any purpose, there are no limits.” — Matthew Green (@matthew_d_green), May 10, 2022

    Green goes on to describe the proposal as “the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR.”

    There’s no denying that CSAM and child exploitation is a problem, and an abhorrent one at that. Tackling it requires finding a balance between the various factors involved. Unfortunately, it’s a balance that is difficult to achieve, as the very technologies journalists, activists, and other endangered individuals rely on to keep them safe are the same technologies predators use to exploit children.

    The EU’s latest proposal, while giving lip service to balance, is being accused of throwing balance out the window. What’s more, it may be the only such proposal that doesn’t even attempt to hide its privacy implications. While many proposals falsely claim it’s possible to protect user privacy while implementing surveillance measures, the EU plainly states these measures are intrusive, especially the measures aimed at detecting new CSAM material.

    This option would represent a higher impact on providers’ freedom to conduct a business and more interference into users’ right to privacy, personal data protection and freedom of expression

    The EU also acknowledges that these measures are not as reliable as the measures employed to detect known CSAM.

    However, given that accuracy levels of current tools, while still being well above 90%, are lower than for the detection of known CSAM, human confirmation is essential.

    As Green points out, this opens the door for false positives and a host of other problems. What’s more, once deployed, there is nothing to prevent the technology from being used to detect kinds of content beyond what existing policy targets; and policies change. An oppressive regime could easily repurpose the technology to scan for anything it views as a challenge to its authority or the status quo.

    The EU has traditionally been a bastion of user privacy, affording its citizens much better protection than the US, let alone China. This new legislation may single-handedly undo that reputation.

  • Clearview AI Dealt Major Blow in Court

    Clearview AI was dealt a major blow in court, agreeing to completely revamp its business within the US.

    Clearview achieved notoriety when it was discovered the firm was scraping popular websites and social media platforms for photos that it used to build a massive facial recognition database. Not only were the company’s actions against the policies of the websites it scraped, but its entire business model raised major privacy concerns among consumers and lawmakers alike. According to The Seattle Times, the company has now settled a lawsuit in Illinois, agreeing to stop selling its services to private parties within the US.

    Despite initially claiming it would only sell its service to law enforcement and other government agencies, Clearview was found to be playing fast and loose with who could access its database. The company also struck deals with authoritarian regimes, and has the stated goal of having more surveillance than China.

    As part of its settlement, Clearview has agreed to permanently stop selling to private parties within the US and will suspend sales to any Illinois-based state government agencies or police departments for five years. The company’s contracts with federal agencies are unaffected.

    The settlement is good news for privacy advocates, and helps restrain one of the sleaziest businesses on the market. There’s still another case pending before a federal judge in Illinois that will hopefully bring further restrictions.

  • Privacy-Focused Tech Companies Call for Ban on ‘Surveillance-Based Advertising’

    A group of tech companies with a history of protecting user privacy is calling for a ban on “surveillance-based advertising.”

    Mojeek, along with DuckDuckGo, Ecosia, StartPage, Fastmail, Proton Technologies and others have written a letter calling on the US, UK, EU and Australia to take action against the dominant form of online advertising. Mojeek is a UK-based search engine that has not tracked users since its inception, and holds the distinction of being the first privacy-oriented search engine. Similarly, the other companies on the list have a long history of protecting user privacy.

    The companies make the case in their open letter that surveillance advertising, commonly called “personalization,” is a threat to consumers, businesses and democracies. The companies also stand as examples that prove it’s possible to build a profitable business without exploiting consumers.

    We are a group of businesses who write to you today to show our support to this initiative. We represent small, medium and large businesses who all believe – and demonstrate on a daily basis – that it is possible to run profitable companies without exploiting the privacy of individuals.

    The companies emphasize they are not anti-advertising, they simply want the industry to use technologies and methods that don’t involve invading the privacy of users.

    Although we recognize that advertising is an important source of revenue for content creators and publishers online, this does not justify the massive commercial surveillance systems set up in attempts to “show the right ad to the right people”.

    Other forms of advertising technologies exist, which do not depend on spying on consumers, and alternative models can be implemented without significantly affecting revenue. On the contrary – and that we can attest to – businesses can thrive without privacy-invasive practices.

  • Brave and DuckDuckGo Push Back Against Google AMP

    Brave and DuckDuckGo are pushing back against Google’s Accelerated Mobile Pages (AMP), bypassing the technology in their browsers and apps.

    AMP is a framework developed and deployed by Google under the guise of helping webpages load faster, especially for mobile devices. When a user clicks on a search result, Google essentially pre-loads the web content, optimizes it, and then presents it to the user, with no visual indication the page is being served from Google’s servers instead of the publisher’s. Both Brave and DuckDuckGo’s web browsers will now work to de-AMP web pages, serving up the publisher’s original site instead of Google’s AMP version.

    Brave outlines their approach in a blog post:

    Brave will protect users from AMP in several ways. Where possible, De-AMP will rewrite links and URLs to prevent users from visiting AMP pages altogether. And in cases where that is not possible, Brave will watch as pages are being fetched and redirect users away from AMP pages before the page is even rendered, preventing AMP/Google code from being loaded and executed.
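    One common AMP URL shape can be rewritten back to the publisher’s URL in just a few lines. The Python sketch below is a simplified illustration of the link-rewriting idea, not Brave’s actual implementation, which handles many more cache formats and also inspects fetched pages.

```python
from urllib.parse import urlsplit

def de_amp(url: str) -> str:
    """Simplified sketch: rewrite one common Google AMP URL shape
    (https://www.google.com/amp/s/<publisher>/<path>) back to the
    publisher's original HTTPS URL. Unrecognized URLs pass through."""
    parts = urlsplit(url)
    if parts.netloc == "www.google.com" and parts.path.startswith("/amp/s/"):
        return "https://" + parts.path[len("/amp/s/"):]
    return url  # not a recognized AMP link; leave it alone

print(de_amp("https://www.google.com/amp/s/example.com/story"))
# -> https://example.com/story
```

    Rewriting the link before navigation means the request never reaches Google’s cache servers at all, which is the stronger of the two protections Brave describes.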

    DuckDuckGo is taking similar measures with their web browser, as well as all of their apps and browser extensions:

    NEW: our apps & extensions now protect against Google AMP tracking. When you load or share a Google AMP page anywhere from DuckDuckGo apps (iOS/Android/Mac) or extensions (Firefox/Chrome), the original publisher’s webpage will be used in place of the Google AMP version.

    DuckDuckGo (@DuckDuckGo), April 19, 2022

    There are a number of reasons why both companies are pushing back and adopting this approach. First and foremost, privacy is one of the biggest casualties of AMP, as Brave points out:

    AMP gives Google an even broader view of which pages people view on the Web, and how people interact with them. AMP encourages developers to more tightly integrate with Google servers and systems, and penalizes publishers with decreased search rankings and placements if they don’t, further allowing Google to track and profile users.

    AMP is also a security nightmare, since users aren’t clearly informed that they are browsing a page served from Google’s servers rather than the publisher’s. This, in turn, gives Google far more control, furthering its monopolization of the web. As Brave points out, AMP doesn’t even deliver the performance improvements Google touts.

    It’s hard not to see AMP, and its upcoming successor, as an unabashed attempt by Google to further control the future of the web. Thankfully, companies like Brave and DuckDuckGo are continuing to fight back.

  • Tim Cook: ‘We’re Not Against Digital Advertising’

    Tim Cook: ‘We’re Not Against Digital Advertising’

    Tim Cook has set the record straight that Apple is not against digital advertising, it simply wants to give consumers more control.

    Apple is at odds with the advertising industry over changes to iOS. Apple recently began enforcing privacy labels, forcing app developers to disclose what user information they collect and track. iOS will soon include App Track Transparency (ATT), forcing apps to ask users for permission to track them.

    Unfortunately, the advertising industry seems to suffer the belief that it has an inalienable right to track users, and build detailed profiles of them, with or without their permission. Thankfully, Apple is opposed to that view, and holds to the idea that people should be able to decide for themselves whether they are tracked and profiled — not to have the decision made for them by advertisers.

    In an interview with the Toronto Sun, via AppleInsider, CEO Tim Cook clarified the company’s stance.

    “We’re not against digital advertising,” Cook said. “I think digital advertising is going to thrive in any situation, because more and more time is spent online, less and less is spent on linear TV. And digital advertising will do well in any situation. The question is, do we allow the building of this detailed profile to exist without your consent?”

    Cook framed Apple’s actions in the context of protecting its users.

    “We feel so much that it’s our responsibility to help our users be able to make this decision. We’re not going to make the decision for them. Because it’s not our decision either. It should be each of ours’ as to what happens with our data. Who has it and how they use it,” Cook continued.

    Cook also addressed why companies like Facebook and Procter & Gamble are so opposed to Apple’s efforts. P&G has even gone so far as to work with a Chinese ad agency to find ways of bypassing ATT.

    According to Cook, these companies are only concerned because they’re facing a reality where they may not have access to the same amount of data as before, and they would only lose that access if customers choose not to give it to them. Rather than accept that change, their approach is: “You don’t want to give us access to all your data, so we’re going to try to find ways around your choice and collect your data anyway.”

    Regardless of whether you’re an Apple or Android user, Apple’s stance on privacy is a refreshing one — one where the customer comes first.

  • UK Will Require Tech Firms to Verify User Identity When Posting Online

    UK Will Require Tech Firms to Verify User Identity When Posting Online

    The UK is preparing to implement its Online Safety Bill, including provisions that will require tech firms to combat online trolls with ID verification.

    The nature of the internet has often been at odds with societal good. By design, the internet was built around anonymity. In recent years, however, that anonymity has come under increasing scrutiny as online harassment and trolls have become a major issue. The issue has especially come into focus as anonymous accounts have been used to spread misinformation, often with far-reaching consequences.

    The UK wants to address that issue by stripping away that anonymity and requiring tech companies to verify user identities, according to CNBC.

    “Tech firms have a responsibility to stop anonymous trolls polluting their platforms,” U.K. Digital Minister Nadine Dorries said in a statement Friday.

    “People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms.”

    Needless to say, online platforms are not happy with the UK’s plans. Many online platforms, civil rights groups, and privacy advocates view anonymity as an important element to preserving people’s safety and privacy, especially against oppressive regimes.

    The Online Safety Bill doesn’t specify how tech companies should implement ID verification, leaving them leeway to find the method that best fits. Some of the potential options include facial recognition, two-factor authentication, or verification with some form of government ID.

  • More Surveillance Than China — Clearview AI’s Business Plan

    More Surveillance Than China — Clearview AI’s Business Plan

    Few companies would proudly tout their business plan as offering more comprehensive surveillance than China, but that’s exactly what Clearview AI is doing.

    Clearview AI gained fame and notoriety for scraping images from popular websites and social media platforms in an effort to build a massive database of photos for facial recognition — in violation of those platforms’ terms. The company claimed to only provide its software to law enforcement and government agencies, but reports indicate it was far looser than it admitted about who had access to its platform. In addition, the company was found to be working with various authoritarian regimes.

    As if the company couldn’t become any more controversial, The Washington Post reports the company is proudly calling its surveillance platform more comprehensive than similar systems in China, thanks to the “public source metadata” and “social linkage” information the company bases its product on.

    Clearview is also working to establish itself as the leader in the field, at a time when the industry leaders are taking a more responsible, measured approach to facial recognition. Clearview, in contrast, sees Microsoft, Amazon, and IBM’s cautious approach as a market opportunity, as it seeks to gain investment for a massive expansion effort.

    What’s more, according to The Post, the company is sending out conflicting messages about its plans. Until now, Clearview has promised it will only sell to law enforcement and government agencies. In the presentation material viewed by The Post, however, government contracts are shown as making up only a small portion of the company’s potential market. The presentation material discusses building out the company’s personnel, specifically to target the financial and commercial market. Even more alarming, Clearview wants to build a “developer ecosystem” to help other companies use its database in their own products.

    Jack Poulson, a former Google research scientist and current head of research advocacy group Tech Inquiry, asked if there was anything “they wouldn’t sell this mass surveillance for? If they’re selling it for just regular commercial uses, that’s just mass surveillance writ large. It’s not targeted toward the most extreme cases, as they’ve pledged in the past.”

    Clearview’s unethical behavior and irresponsible approach to privacy and data security, not to mention the legal implications of its data collection, have already led to multiple lawsuits, investigations, and bans in some countries and jurisdictions.

    Here’s to hoping more countries crack down on this bottom-feeder.

  • Mozilla and Meta Team Up on Privacy-Respecting Ad Tech

    Mozilla and Meta Team Up on Privacy-Respecting Ad Tech

    Mozilla and Meta have teamed up in one of the most unlikely pairings, in an effort to create privacy-respecting ad tech.

    The advertising industry is currently caught in a dilemma between mining the information it needs to be profitable and respecting user privacy. The two have generally been mutually exclusive, with privacy losing out — at least until recently. Efforts by Apple to improve privacy and give users options to reduce how much companies can track their activity have made a significant dent in many ad companies’ business, including Meta.

    Mozilla and Meta appear to be tackling one of the biggest issues in the advertising-versus-privacy debate: how to handle attribution, an important metric that helps advertisers gauge how effective their campaigns are.

    Mozilla’s Martin Thomson described the two companies’ solution in a blog post:

    For the last few months we have been working with a team from Meta (formerly Facebook) on a new proposal that aims to enable conversion measurement – or attribution – for advertising called Interoperable Private Attribution, or IPA.

    IPA aims to provide advertisers with the ability to perform attribution while providing strong privacy guarantees. IPA has two key privacy-preserving features. First, it uses Multi-Party Computation (MPC) to avoid allowing any single entity — websites, browser makers, or advertisers — to learn about user behavior. Mozilla has some experience with MPC systems as we’ve deployed Prio for privacy-preserving telemetry. Second, it is an aggregated system, which means that it produces results that cannot be linked to individual users. Together these features mean that IPA cannot be used to track or profile users.
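IPA's actual protocol is considerably more involved, but the core MPC idea it builds on, additive secret sharing, can be sketched briefly. Each user splits a private value (say, a conversion flag) into random shares sent to two non-colluding helpers; neither helper can learn any individual's value, yet the combined sums reveal the aggregate. The values below are hypothetical:

```python
import secrets

MOD = 2**61 - 1  # arithmetic is done modulo a large prime

def split(value: int) -> tuple[int, int]:
    """Split a private value into two random shares.
    Each share alone is uniformly random and reveals nothing."""
    share_a = secrets.randbelow(MOD)
    share_b = (value - share_a) % MOD
    return share_a, share_b

# Hypothetical per-user conversion flags, split between two helpers.
values = [1, 0, 1, 1, 0]
shares = [split(v) for v in values]

# Each helper sums only the shares it received...
sum_a = sum(a for a, _ in shares) % MOD
sum_b = sum(b for _, b in shares) % MOD

# ...and only the combined sums reveal the aggregate total.
total = (sum_a + sum_b) % MOD
print(total)  # -> 3
```

This is the same basic principle behind Prio, the privacy-preserving telemetry system Mozilla mentions having deployed.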

    The key to IPA’s success will be whether enough companies adopt it. Having Mozilla and Meta — two organizations on the extreme opposite ends of the privacy spectrum — collaborating on it is sure to make other companies take notice.

  • Apple Recorded Some Siri Interactions, Even If the Setting Was Disabled

    Apple Recorded Some Siri Interactions, Even If the Setting Was Disabled

    Apple has said it inadvertently recorded some customers’ Siri interactions, even when the setting was disabled.

    Apple gives users the choice to share their Siri interactions in an effort to improve the virtual assistant. If the option is enabled, Apple can store and analyze those recordings.

    Evidently, when iOS 15 was released, a bug activated the feature for some users, despite them previously deactivating it. As soon as Apple realized the issue, it took steps to rectify it.

    “With iOS 15.2, we turned off the Improve Siri & Dictation setting for many Siri users while we fixed a bug introduced with iOS 15,” Apple spokesperson Catherine Franklin told The Verge. “This bug inadvertently enabled the setting for a small portion of devices. Since identifying the bug, we stopped reviewing and are deleting audio received from all affected devices.”

    Apple has not disclosed how many users were impacted, although the company says the bug impacted “a small portion of devices.”

    As ZDNet highlights, it appears the bug fix resets the permission warning as well, with iOS 15.4 asking users for permission to use their recordings.

  • Like a Bad Penny the EARN IT Act Is Back

    Like a Bad Penny the EARN IT Act Is Back

    In the latest attack on privacy and encryption, lawmakers have re-introduced the EARN IT Act, described as “one of the worst pieces of Internet legislation.”

    The Eliminating Abusive and Rampant Neglect of Interactive Technologies Act is a piece of wildly unpopular legislation that was originally introduced in 2020. The goal of the legislation was to protect children and help eliminate online sexual abuse, obviously admirable goals that any decent human being supports.

    Unfortunately, when it was first introduced, the bill essentially sounded a death knell for encryption, which is the very basis of online privacy and security, and treated every online citizen as a suspect. The bill would have required companies to follow mandatory “best practices,” practices that would have forced companies to weaken encryption in order to comply.

    In its original incarnation, the bill was eventually amended to exclude encryption from the list of things that could increase corporate liability, and the “best practices” were changed to recommendations instead of requirements. Nonetheless, the bill remained unpopular enough to eventually be dropped.

    Mass Surveillance Is Once Again on the Table

    Despite its unpopularity, Senators Richard Blumenthal and Lindsey Graham have once again reintroduced it. The Electronic Frontier Foundation (EFF) describes the sweeping impact the bill would have.

    Let’s be clear: the new EARN IT Act would pave the way for a massive new surveillance system, run by private companies, that would roll back some of the most important privacy and security features in technology used by people around the globe. It’s a framework for private actors to scan every message sent online and report violations to law enforcement. And it might not stop there. The EARN IT Act could ensure that anything hosted online—backups, websites, cloud photos, and more—is scanned.

    The bill’s goal is multi-pronged:

    • First and foremost, it attacks end-to-end encryption, encouraging “states to pass laws that will punish companies when they deploy end-to-end encryption, or offer other encrypted services.”
    • The bill encourages the use of government-approved software that will be used to scan everything sent online.
    • The bill paves the way for the establishment of a 19-person commission, made up largely of law enforcement personnel, that will establish voluntary “best practices” for companies to follow.

    As the EFF points out, despite provisions being added to protect encryption, the provisions fall far short of actually doing so. The door is still left wide open for companies to be held liable for what users of their platforms do, with a platform’s use of encryption being held up as “evidence” of its culpability.

    Further, the bill essentially deputizes tech companies in an effort to do an end-run around the legal and constitutional issues of having a government-run surveillance state.

    The EARN IT Act doesn’t target Big Tech. It targets every individual internet user, treating us all as potential criminals who deserve to have every single message, photograph, and document scanned and checked against a government database. Since direct government surveillance would be blatantly unconstitutional and provoke public outrage, EARN IT uses tech companies—from the largest ones to the very smallest ones—as its tools.

    In view of the enormous problems the EARN IT Act would cause, Evan Greer, director of the digital human rights group Fight for the Future, said:

    The EARN IT Act is truly one of the worst pieces of Internet legislation I have seen in my entire career, and … that’s saying a lot. Please, we need REAL solutions to the harms of Big Tech, not poorly written laws that will get people killed and do more harm than good /endrant

    — Evan Greer (@evan_greer), January 31, 2022

  • LG: We Know What Users Want — More Ads On Their TVs

    LG: We Know What Users Want — More Ads On Their TVs

    In the latest example of unadulterated greed, LG is planning to serve its users even more ads on TVs they have spent hundreds of dollars on.

    As we have pointed out many times in the past, it’s one thing for companies like Google or Facebook to make money selling ads to people. They are, after all, providing their services free of charge. When a customer spends hundreds of dollars on a piece of hardware, however, there’s a certain expectation that they will get to enjoy that product ad-free.

    Those days may be over, if LG Ad Solutions has anything to say about it. Not content to charge a premium for its TVs, the company is evidently planning on bombarding its users with even more ads. Unlike traditional TV commercials, these ads are in LG’s smart TV interface, meaning there’s no way to easily avoid them when using the built-in features.

    “We’re turning the tables for advertisers, making performance not just something brands aim for, but something that is actually guaranteed,” said chief executive officer Raghu Kodige. “Whether driving sales, conversions, or customer acquisition, advertisers struggle to quantify ROAS for TV spend. We created this extensive program as the starting point in a new paradigm for TV-driven outcomes in which marketers are assured every CTV ad dollar hits the bullseye.”

    Worse yet, the company plans on greatly expanding the metrics it uses to track the effectiveness of ad campaigns.

    The conversion metrics program will begin immediately with app installs and is available globally. More conversion metrics such as tune-in, web visits, physical location visits, and more, will be available in the coming months both in the U.S. and globally. 

    There’s just one thing LG seems to have forgotten: Advertisers aren’t the ones buying their TVs, meaning advertisers should not be the company’s prime concern — its customers should be. 

    Fortunately, users still have a way to opt out, albeit at an added expense. Users who don’t want to see LG’s ads can withhold internet access from the TV and use a third-party device, such as an Apple TV, instead.

  • DC AG Sues Google For Using ‘Dark Patterns’ to Undermine Privacy

    DC AG Sues Google For Using ‘Dark Patterns’ to Undermine Privacy

    Google is once again in the crosshairs for its privacy (or lack thereof), with the DC Attorney General suing it over “Dark Pattern” practices.

    Dark Patterns are deceptive practices some websites and apps use to trick users into buying things or taking actions they otherwise wouldn’t, or didn’t mean to. The website DarkPatterns.org is dedicated to shaming companies that engage in this type of behavior.

    Google is now facing accusations from DC Attorney General Karl A. Racine that it is using such Dark Patterns to get its customers to compromise their privacy.

    To gain access to user location data, Google manipulates its users through deceptive design choices that alter user decision-making in ways that harm the user and benefit Google. These practices are known as “dark patterns.” Google has made extensive use of dark patterns—such as repeated nudging, misleading pressure tactics, and evasive and deceptive descriptions of features and settings—to stop users from protecting their privacy and cause them to provide more and more data inadvertently or out of frustration.

    AG Racine also accuses the company of making it impossible for customers to truly opt out of location tracking, deceiving customers about how much control they have over their privacy, and misleading customers about how much changing device settings really protects their privacy.

    AG Racine is leading a coordinated, bipartisan effort to take Google to task for these actions, with the Indiana, Texas, and Washington AGs also filing lawsuits against Google in their states.

    “Google falsely led consumers to believe that changing their account and device settings would allow customers to protect their privacy and control what personal data the company could access,” said AG Racine. “The truth is that contrary to Google’s representations it continues to systematically surveil customers and profit from customer data. Google’s bold misrepresentations are a clear violation of consumers’ privacy. I’m proud to lead this bipartisan group of attorneys general that will hold Google accountable for its deception. Through this lawsuit, we will hold Google accountable, and in the process, educate consumers on how their personal data—particularly sensitive data about their physical location—is collected, stored, and monetized. The result of our collective action is that consumers, not Google, will determine how their data is or is not used.”