WebProNews

Tag: CSAM

  • Apple Abandons Plans to Scan Devices for CSAM

    Apple has completely abandoned one of its most controversial initiatives, a plan that would have involved scanning users’ devices for CSAM.

    Tech companies are always looking for ways to identify and root out Child Sexual Abuse Material (CSAM) on their platforms. Google, Microsoft, Meta, and others routinely scan content on their cloud platforms against a centralized database of known CSAM maintained by the National Center for Missing & Exploited Children (NCMEC).

    Apple’s proposed solution was markedly different: a two-step process that involved scanning on the consumer’s device itself. The company planned to install a database of hashes representing the files in NCMEC’s database on every iPhone, iPad, Mac, and Apple TV.

    To be clear, Apple was not going to place CSAM material on devices, only mathematical hashes representing it. Any device with iCloud Photos enabled would then compute the same kind of hash for local photos and videos and compare the results against the database of NCMEC hashes. Once a threshold of matches was reached, the case would undergo human review and, if the matches were confirmed, be forwarded to the authorities. Until that point, all results would remain completely anonymous.
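
    In practical terms, the process described above amounts to local hash matching with a reporting threshold. The short Python sketch below illustrates only that idea; the function names and the threshold value are hypothetical stand-ins, and Apple’s actual design relied on a perceptual hash (NeuralHash) and a blinded hash database rather than a plain set of hashes.

    ```python
    # A minimal sketch of threshold-based hash matching, assuming a plain set of
    # known hashes. Illustrative only; this is not Apple's implementation.
    from typing import Iterable, Set

    MATCH_THRESHOLD = 30  # hypothetical value chosen for illustration


    def count_matches(local_hashes: Iterable[bytes], known_hashes: Set[bytes]) -> int:
        """Count how many locally computed image hashes appear in the known hash set."""
        return sum(1 for h in local_hashes if h in known_hashes)


    def flag_for_human_review(local_hashes: Iterable[bytes], known_hashes: Set[bytes]) -> bool:
        """Surface an account for human review only once the match count crosses
        the threshold; below the threshold, nothing is reported."""
        return count_matches(local_hashes, known_hashes) >= MATCH_THRESHOLD
    ```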

    After pushback from the industry and from security and privacy experts, Apple initially delayed the rollout and has now abandoned its plans in favor of other, less dangerous methods.

    “After extensive consultation with experts to gather feedback on child protection initiatives we proposed last year, we are deepening our investment in the Communication Safety feature that we first made available in December 2021,” the company told WIRED in a statement. “We have further decided to not move forward with our previously proposed CSAM detection tool for iCloud Photos. Children can be protected without companies combing through personal data, and we will continue working with governments, child advocates, and other companies to help protect young people, preserve their right to privacy, and make the internet a safer place for children and for us all.”

    The company will instead focus on its opt-in Communication Safety features that parents can activate to flag inappropriate texts, pictures, and videos sent to their children via iMessage.

    “Potential child exploitation can be interrupted before it happens by providing opt-in tools for parents to help protect their children from unsafe communications,” the company continued in its statement. “Apple is dedicated to developing innovative privacy-preserving solutions to combat Child Sexual Abuse Material and protect children, while addressing the unique privacy needs of personal communications and data storage.”

    The new approach strikes a far better balance between the responsibilities Apple is trying to shoulder and individual privacy. While Apple’s original scanning approach seemed promising in terms of privacy, it also posed a host of problems. Security and privacy experts immediately pointed out the danger of Apple being forced by governments to use its matching algorithm for other purposes, such as political, religious, or human rights surveillance. There are also documented instances of non-CSAM images being placed in the NCMEC database, opening up the possibility of false positives.

    Not surprisingly, the EU recently proposed new rules that sound eerily similar to Apple’s method, while simultaneously acknowledging “the detection process would be the most intrusive one for users.”

    Interestingly, Princeton researchers developed a similar system shortly before Apple did, ultimately tabled it, and wrote a paper on why such a system should never be used.

    “Our system could be easily repurposed for surveillance and censorship,” the researchers wrote. “The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.”

    Overall, Apple’s announcement is a welcome one. To be fair, however, more time will need to pass to ensure Apple lives up to its promise and has not been forced to implement its scanning technology covertly.

  • Apple Delays CSAM Scanning Amid Backlash

    Apple has announced it is delaying its plans to implement CSAM (Child Sexual Abuse Material) scanning amid backlash from all sides.

    Apple previously announced plans to implement CSAM scanning in the next versions of iOS, iPadOS, and macOS. While many companies, including Microsoft, Google, Dropbox, and Facebook, scan their servers for CSAM, Apple’s solution was unique in that part of the process would take place on-device.

    The backlash was immediate and severe, leading Apple to try to explain how the system worked. Ultimately, the company was unsuccessful in assuaging people’s concerns and is now delaying the feature, according to a statement on its website:

    “Update as of September 3, 2021: Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them and to help limit the spread of Child Sexual Abuse Material. Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

    Unfortunately, Apple’s statement is small comfort, as many have pointed out that any attempt to continue with the original goal is extremely dangerous. In our upcoming multi-part breakdown of Apple’s plans, we’ll explain why.

  • Apple Will Check Photo Uploads for Child Sex Abuse Images

    Apple will begin checking photos being uploaded to its iCloud service against a database of Child Sexual Abuse Material (CSAM), in an effort to protect children.

    In the battle over encryption, often called the Crypto Wars, governments have frequently used protecting children as justification for promoting backdoors in encryption and security. Unfortunately, no matter how well-intentioned such efforts are, as we have highlighted before, there is no way to create a backdoor in encryption that is safe from exploitation by others.

    Apple appears to be trying to offer a compromise solution, one that would preserve privacy, while still protecting children.

    Apple outlined how its CSAM system will work:

    Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.

    Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.

    Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
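
    For readers unfamiliar with the term, threshold secret sharing is a standard cryptographic building block: a secret (for example, a decryption key) is split into many shares such that any t of them reconstruct it, while fewer than t reveal nothing. The sketch below is a generic Shamir-style illustration of that property, not Apple’s actual safety-voucher construction.

    ```python
    # A generic Shamir t-of-n secret sharing sketch over a prime field.
    # Illustrative only; Apple's voucher scheme is more involved than this.
    import random

    PRIME = 2**127 - 1  # a large prime; all arithmetic happens in this field


    def split_secret(secret: int, threshold: int, num_shares: int):
        """Split `secret` into shares; any `threshold` of them can recover it."""
        # Random polynomial of degree threshold-1 whose constant term is the secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        return [
            (x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, num_shares + 1)
        ]


    def recover_secret(shares):
        """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret


    # Any three shares recover the secret; one or two reveal nothing about it.
    shares = split_secret(secret=123456789, threshold=3, num_shares=10)
    assert recover_secret(shares[:3]) == 123456789
    ```

    In a design like the one Apple described, the reconstructed secret would be the key needed to decrypt the safety vouchers, which is why the server learns nothing about any individual image until an account crosses the match threshold.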

    Needless to say, Apple’s announcement has been met with a variety of responses. The Electronic Frontier Foundation (EFF), in particular, has been highly critical of Apple’s decision, even accusing the company of going back on its former privacy stance and embracing backdoors.

    The EFF is particularly concerned that Apple’s new system could be broadened to cover speech, or virtually anything else governments may not approve of. While there is certainly a concern the system could be abused that way, using an on-device method to screen for something as vile as CSAM is still a far cry from using it to monitor speech.

    In many ways, Apple’s new approach to combatting CSAM is somewhat similar to its approach to combatting malware. There have been times in the past when Apple took the liberty of proactively removing particularly dangerous malware from devices. Critics could argue that Apple could extend that, at the behest of governments, to removing any programs deemed offensive. But that hasn’t happened. Why? Because there’s a big difference between removing malware and censoring applications.

    The National Center for Missing & Exploited Children, admittedly a critic of end-to-end encryption, praised Apple’s decision.

    “With so many people using Apple products, these new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material,” John Clark, chief executive of the NCMEC, said in a statement, via Reuters. “The reality is that privacy and child protection can co-exist.”

    Ultimately, only time will tell if Apple has struck the right balance between privacy and child protection. It’s worth noting that Microsoft, Google, and Facebook already have similar systems in place, but Apple believes its system offers significant benefits in the realm of privacy.

    In addition to going a long way toward protecting children, Apple’s willingness to make this concession may also disarm one of the biggest arguments against end-to-end encryption, preserving the technology against legislative action.