WebProNews

Tag: censorship

  • Google CEO Criticized For Response to AI Researcher’s Exit

    Google CEO Sundar Pichai has sent an email to Google employees in an effort to address backlash the company is facing over Dr. Timnit Gebru’s exit.

    Timnit Gebru is one of the leading artificial intelligence ethics researchers in the world, widely respected for her expertise. An issue arose as a result of a research paper Gebru and other researchers were working on. The paper tackled the ethical issues with large language models (LLMs), and was approved internally on October 8. According to Gebru, she was later asked to remove her name from the paper because an internal review found it to be objectionable.

    As Gebru later pointed out in an interview with Wired, researchers must be free to go where the research takes them.

    You’re not going to have papers that make the company happy all the time and don’t point out problems. That’s antithetical to what it means to be that kind of researcher.

    Google’s head of AI, Jeff Dean, said the paper was not submitted with the necessary two-week lead time. Gebru’s team, however, wrote in a blog post supporting Gebru that “this is a standard which was applied unevenly and discriminatorily.”

    As a result, Gebru gave her supervisors some conditions she wanted met, otherwise she would work toward an amicable exit from the company. According to her team, the conditions “were for 1) transparency around who was involved in calling for the retraction of the paper, 2) having a series of meetings with the Ethical AI team, and 3) understanding the parameters of what would be acceptable research at Google.”

    Instead of working with Gebru, her supervisors accepted her “resignation” effective immediately. Gebru’s team is quick to point out that “Dr. Gebru did not resign,” (italics theirs) and was instead terminated.

    The company’s actions brought swift and vocal backlash. At the time of writing, some 2,351 Googlers, along with 3,729 supporters in academia, industry and civil society, have signed a petition in support of Gebru. It seems Pichai and company realize the situation is not going away without being addressed.

    In an email to employees, first published by Axios, Pichai attempted to do damage control, apologizing for what happened and vowing to do better in the future.

    So far, the email has not been met with praise. Gebru took to Twitter to criticize the lack of accountability, as well as the insinuation she was an “angry Black woman” for whom a de-escalation strategy was needed.

    Similarly, others are criticizing Pichai’s email for essentially being tone-deaf. Jack Clark, OpenAI’s Policy Director, is one such voice.

    In our initial coverage of this situation, we stated: “It goes without saying that Google is providing a case study in how not to handle this kind of situation.”

    In the aftermath of Pichai’s email, that statement continues to ring true.

    Here’s the email in full:

    Hi everyone,

    One of the things I’ve been most proud of this year is how Googlers from across the company came together to address our racial equity commitments. It’s hard, important work, and while we’re steadfast in our commitment to do better, we have a lot to learn and improve. An important piece of this is learning from our experiences like the departure of Dr. Timnit Gebru.

    I’ve heard the reaction to Dr. Gebru’s departure loud and clear: it seeded doubts and led some in our community to question their place at Google. I want to say how sorry I am for that, and I accept the responsibility of working to restore your trust.

    First – we need to assess the circumstances that led up to Dr. Gebru’s departure, examining where we could have improved and led a more respectful process. We will begin a review of what happened to identify all the points where we can learn — considering everything from de-escalation strategies to new processes we can put in place. Jeff and I have spoken and are fully committed to doing this. One of the best aspects of Google’s engineering culture is our sincere desire to understand where things go wrong and how we can improve.

    Second – we need to accept responsibility for the fact that a prominent Black, female leader with immense talent left Google unhappily. This loss has had a ripple effect through some of our least represented communities, who saw themselves and some of their experiences reflected in Dr. Gebru’s. It was also keenly felt because Dr. Gebru is an expert in an important area of AI Ethics that we must continue to make progress on — progress that depends on our ability to ask ourselves challenging questions.

    It’s incredibly important to me that our Black, women, and underrepresented Googlers know that we value you and you do belong at Google. And the burden of pushing us to do better should not fall on your shoulders. We started a conversation together earlier this year when we announced a broad set of racial equity commitments to take a fresh look at all of our systems from hiring and leveling, to promotion and retention, and to address the need for leadership accountability across all of these steps. The events of the last week are a painful but important reminder of the progress we still need to make.

    This is a top priority for me and Google leads, and I want to recommit to translating the energy that we’ve seen this year into real change as we move forward into 2021 and beyond.

    — Sundar

  • Section 230’s Future Is Shaky…Even With a Biden/Harris Administration

    The controversial Section 230, protecting social media companies, may be under threat even with the incoming Biden/Harris administration.

    Section 230 of the Communications Decency Act protects online platforms from being held legally responsible for the content their users post. This has, in some ways, given rise to the toxic culture often associated with social media, as there are no strong incentives for companies to crack down on hate speech, cyberbullying and the like.

    While companies have slowly begun to self-moderate, it has increasingly become a murky situation. On the one hand, some critics have praised Facebook, Twitter and others for beginning to crack down on some content, while others have decried their attempts as censorship. These accusations have come from the very heights of government, as President Trump has alternated between using Twitter as his preferred communication platform and blasting the company when it flags his posts containing misinformation. Most recently, Trump has even threatened to veto a defense spending bill unless Section 230 is repealed.

    The situation is further complicated by the very fact that social media companies have begun moderating content. Critics argue the companies no longer need, nor should have, the protections of Section 230 since they’ve already begun to self-moderate—the very thing they weren’t legally required to do.

    While Trump has been clamoring for the repeal of Section 230, some had thought a new administration might take a different approach. It appears, however, that Section 230’s future may still be uncertain.

    At a virtual book launch hosted by Georgetown Law, Bruce Reed—who served as a top tech advisor for President-elect Joe Biden during his campaign—made the case for changes to Section 230.

    I think there’s an emerging consensus that it’s long past time to hold the big social media platforms accountable for what’s published on their platforms, the way we do newspaper publishers and broadcasters.

    Needless to say, Reed’s comments are non-binding. During the event he even went so far as to say that he doesn’t speak for the new administration’s tech policy. Nonetheless, his observations come from years serving as a close associate of Biden, both as a campaign tech advisor and as Biden’s chief of staff during his time as vice president.

    Therefore, while non-binding, Reed’s comments may very well indicate change is on the horizon for Section 230.

  • Justice Department Recommends Rolling Back Big Tech Protections

    In the wake of President Trump’s executive order targeting social media companies, the Department of Justice (DOJ) has proposed rolling back tech protections.

    The issue started several weeks ago when Twitter, for the first time ever, fact-checked Trump on two of his tweets. As a result, Twitter suddenly found itself in the crosshairs of the president, who wasted no time signing an executive order to target the legal protections tech companies enjoy.

    Now the DOJ has taken up the banner, proposing changes to Section 230 of the Communications Decency Act of 1996. Section 230 largely grants immunity to tech companies for what their users post on their platforms. This immunity has helped tech and social media companies to grow, with minimal concern about the legal repercussions of what their users say.

    “When it comes to issues of public safety, the government is the one who must act on behalf of society at large. Law enforcement cannot delegate our obligations to protect the safety of the American people purely to the judgment of profit-seeking private firms. We must shape the incentives for companies to create a safer environment, which is what Section 230 was originally intended to do,” said Attorney General William P. Barr. “Taken together, these reforms will ensure that Section 230 immunity incentivizes online platforms to be responsible actors. These reforms are targeted at platforms to make certain they are appropriately addressing illegal and exploitive content while continuing to preserve a vibrant, open, and competitive internet.”

    The proposed changes center around four primary goals: incentivizing platforms to address illicit content, increasing transparency in how content is moderated, clarifying the government’s enforcement powers and promoting competition.

    It remains to be seen if any proposed changes will gain enough traction in Congress. Section 230 has been around as long as it has specifically because navigating these issues can quickly turn into a quagmire.

  • Zoom In Hot Water Over Censorship On Behalf of Beijing

    Zoom has found itself in hot water again, this time over suspending accounts at the request of the Chinese government.

    In a blog post, the company details how it was approached by Chinese officials regarding multiple accounts that were hosting large meetings commemorating the anniversary of Tiananmen Square. In three of the four instances, Zoom suspended the accounts due to a large number of the participants being from mainland China.

    The company points out that all three of the suspended accounts have since been reactivated, and outlines the mistakes it made in how it handled China’s requests, as well as what it is doing to prevent this situation in the future. Specifically, while the company must comply with local laws, Zoom says it should not have taken action that impacted those outside of China by shutting down the meetings and suspending or terminating the three accounts.

    Instead, Zoom says it should have blocked participants by country or let the meetings run. Currently, the company does not have the ability to block participants by country, but rightly acknowledges it should have anticipated such a need.

    In the meantime, the company says it will no longer allow requests from the Chinese government to impact users outside of China, and will update its policies for handling such matters.

    On the technical side, the company is working on the ability to block participants by country.

    “Zoom is developing technology over the next several days that will enable us to remove or block at the participant level based on geography,” reads the blog post. “This will enable us to comply with requests from local authorities when they determine activity on our platform is illegal within their borders; however, we will also be able to protect these conversations for participants outside of those borders where the activity is allowed.”

    Despite these steps, US lawmakers are already asking for clarification from Zoom regarding this fiasco, according to Reuters.

  • Senator Hawley Questions Google CEO Over China Censorship

    YouTube is in hot water over claims it engaged in censorship on behalf of the Chinese government.

    In a letter to Google and Alphabet CEO Sundar Pichai, Senator Josh Hawley asked for an explanation about the alleged censorship. YouTube has explained the censorship occurred as a result of an error in its enforcement system, but has provided very little information beyond that. Understandably, the explanation is doing little to ease people’s concerns.

    In his letter to Pichai, Hawley writes that “Google engineers may have changed the algorithms on YouTube to automatically censor certain criticisms of the Chinese Communist Party. In particular, Google engineers appear to have altered YouTube code to automatically block the Chinese terms for ‘communist bandit’ and ‘50-cent party’—the latter term referring to a division of the Chinese Communist Party whose purpose is deflecting criticism from the Party by using sockpuppet accounts to spread online propaganda. These reports follow in the wake of Google’s purported ‘mis’-translation last year of the phrase ‘I am sad to see Hong Kong become part of China’ to ‘I am happy to see Hong Kong become part of China.’”

    Senator Hawley gave Pichai until June 12 to respond. It remains to be seen if Google will provide concrete information on the issue.

  • Facebook Caves to Pressures to Censor ‘Anti-State’ Posts in Vietnam

    Facebook has confirmed it is censoring anti-government posts in Vietnam following restrictions that throttled local access to its site.

    According to Reuters, Facebook received takedown orders for content deemed “anti-state.” To ensure compliance, the government ordered Facebook’s local servers to be taken down, significantly impacting the site’s performance for local users. In some cases, the website became completely unusable. The action is not surprising, as Vietnam currently ranks 175 out of 180 on the Reporters Without Borders’ World Press Freedom Index.

    As a result of the measures taken, Facebook ultimately gave in, agreeing to censor “anti-state” messages.

    “Once we committed to restricting more content, then after that, the servers were turned back online by the telecommunications operators,” one of Reuters’ sources said.

    Now that Facebook has shown what it takes to make the company cave, it will be interesting to see how many other countries follow suit.

  • What Not to Do: TikTok Censors ‘Ugly,’ ‘Poor’ and ‘Disabled’

    It may be one of the hottest social media platforms, but TikTok is providing a template of what not to do.

    Reporting for The Intercept, Sam Biddle, Paulo Victor Ribeiro and Tatiana Dias say that the company behind TikTok has “instructed moderators to suppress posts created by users deemed too ugly, poor, or disabled for the platform, according to internal documents obtained by The Intercept.”

    TikTok has faced ongoing scrutiny over privacy and security concerns. The Pentagon released guidance instructing military personnel to delete the app, and the company faces a lawsuit in California over allegations it uploaded videos to China without user consent. The app has also been dogged by censorship concerns, and even announced a Transparency Center where critics can analyze how the company moderates posts.

    According to The Intercept, “moderators were also told to censor political speech in TikTok livestreams, punishing those who harmed ‘national honor’ or broadcast streams about ‘state organs such as police’ with bans from the platform.” The policy also called for TikTok moderators “to suppress uploads from users with flaws both congenital and inevitable. ‘Abnormal body shape,’ ‘ugly facial looks,’ dwarfism, and ‘obvious beer belly,’ ‘too many wrinkles,’ ‘eye disorders,’ and many other ‘low quality’ traits are all enough to keep uploads out of the algorithmic fire hose. Videos in which ‘the shooting environment is shabby and dilapidated,’ including but ‘not limited to … slums, rural fields’ and ‘dilapidated housing’ were also systematically hidden from new users, though ‘rural beautiful natural scenery could be exempted,’ the document notes.”

    Although a TikTok spokesman said the measures were anti-bullying policies that were no longer in effect, the documents The Intercept reviewed explicitly cited subscriber growth as the real reason.

    Given TikTok’s ongoing privacy and security issues, not to mention missteps like this, it’s probably a safe bet that TikTok’s growth is about to experience a slowdown.

  • RSF Creates ‘The Uncensored Library’ In Minecraft

    Minecraft is already one of the most successful video games in the world, but now it’s also serving to help preserve information in its uncensored form.

    Minecraft is a nearly infinite, open-world game that lets users create virtually anything they can imagine. Rather than recreating a building or a scene from a movie or TV show, NGO Reporters Without Borders (RSF) has created a virtual library to house works that were originally censored in their countries of origin.

    “Minecraft is a favourite – one of the world’s most successful computer games, with more than 145 million active players every month,” reads the statement. “Here communities can build entire worlds out of blocks, experience the freedom of an open world. Its creative mode is often described as ‘digital Lego’. In these countries, where websites, blogs and free press in general are strictly limited, Minecraft is still accessible by everyone.

    “Reporters Without Borders (RSF) used this backdoor to build ‘The Uncensored Library’: A library that is now accessible on an open server for Minecraft players around the globe. The library is filled with books, containing articles that were censored in their country of origin. These articles are now available again within Minecraft hidden from government surveillance technology inside a computer game. The books can be read by everyone on the server, but their content cannot be changed. The library is growing, with more and more books being added to overcome censorship.”

    RSF’s ingenious use of Minecraft is a perfect example of the innovative ways technology—including video games—can be used to address serious issues. According to the RSF, “the Uncensored Library is accessible through Minecraft with the server address: visit.uncensoredlibrary.com.”

  • TikTok Plans Transparency Center, Tries to Dispel Censorship Claims

    TikTok has announced the upcoming launch of a new Transparency Center, aimed at pulling the curtain back on the platform’s moderation efforts.

    TikTok has faced ongoing scrutiny over privacy concerns, with at least one lawsuit alleging the company secretly recorded videos and uploaded them to servers in China. Concerns over the app prompted the Department of Defense (DOD) to instruct all personnel to uninstall the app, and for Reddit’s CEO to label the social media app “fundamentally parasitic.”

    In an effort to address concerns, including allegations it censors users, TikTok is launching its Transparency Center where outside experts will have “an opportunity to directly view how our teams at TikTok go about the day-to-day challenging, but critically important, work of moderating content on the platform.

    “Through this direct observation of our Trust & Safety practices, experts will get a chance to evaluate our moderation systems, processes and policies in a holistic manner.”

    Although the Transparency Center initially focuses on censorship, it will eventually help address other security and privacy concerns as well.

    “The Transparency Center will open in early May with an initial focus on TikTok’s content moderation. Later, we will expand the Center to include insight into our source code, and our efforts around data privacy and security. This second phase of the initiative will be spearheaded by our newly appointed Chief Information Security Officer, Roland Cloutier, who starts with the company next month.”

  • Amazon Threatens Employees Speaking Out Against Its Climate Policies

    The Washington Post is reporting that Amazon has warned at least two employees for speaking out against its climate policies.

    In September, Amazon CEO Jeff Bezos (who owns the Washington Post) announced The Climate Pledge, the company’s commitment to meet the goals of the Paris Agreement 10 years early. In spite of that, some employees have been critical of the company’s climate efforts, two of whom were quoted in a previous Washington Post article.

    Evidently, Amazon did not take kindly to two of its employees criticizing the company.

    According to the Washington Post, “a lawyer in the e-commerce giant’s employee-relations group sent a letter to two workers quoted in an October Washington Post report, accusing them of violating the company’s external communications policy. An email sent to Maren Costa, a principal user-experience designer at the company, and reviewed by The Post warned that future infractions could ‘result in formal corrective action, up to and including termination of your employment with Amazon.’”

    The company defended its external communication policy as “similar to other large companies,” according to a spokeswoman. In spite of that, Costa vowed to continue speaking up and fighting the company’s censorship.

    This is just the latest in what has been termed “employee activism,” where employees hold the companies they work for responsible for their actions. With this trend on the rise, companies have had to be far more careful to take their employees’ values into consideration when making decisions.

    “No company today is completely immune to these types of risks, so the issue is how to minimise their potential and recover quickly if damaging events occur,” Leslie Gaines-Ross, Chief Reputation Strategist at PR firm Weber Shandwick, told CEO Magazine.

    Amazon may find itself in a sticky situation if it fails to deliver on its Climate Pledge, or otherwise undermines it.

  • TikTok Releases Transparency Report In Effort To Quell Concerns

    TikTok has released its first ever transparency report amid increasing scrutiny related to privacy and censorship, according to NBC News.

    TikTok has been in the news a lot lately, and not in the way any company wants to be. The Department of Defense recently released guidance instructing personnel to delete the app, with both the Navy and Army following suit.

    Its problems have also included a lawsuit alleging the app created an account and uploaded videos and face scans to servers in China. The plaintiff alleges that, while they downloaded the app, they had never set up an account.

    In view of the concerns, “Senate Democratic leader Chuck Schumer of New York and Sen. Tom Cotton, R-Ark., a member of the Armed Services and Intelligence committees, sent a letter asking Joseph Maguire, the acting director of national intelligence, to assess TikTok and other China-based companies for potential security risks.”

    In an effort to address those concerns, TikTok has released its first transparency report detailing the worldwide government requests it received in the first half of 2019. India took the top spot, with the U.S. coming in second. The company has vowed to continue releasing transparency reports moving forward.

    Notably, China is not listed in the report, although the company says it does not operate there and that data for American users is stored in the U.S.

  • Facebook, Twitter and Google Hire German Censorship Army to Avoid Millions in Fines

    According to a Wall Street Journal article, Facebook, Google and Twitter have hired censorship teams to remove user-posted content that German law prohibits. As of January 1, 2018, a new law in Germany puts tech companies that allow user-posted content at risk of fines of up to $60 million if they fail to remove posts deemed illegal.

    Companies must remove hate speech within 24 hours in simple cases or within 7 days where content evaluation is difficult. The new fines can apply to any social media company that allows user content such as Reddit, Gab, Tumblr, Snapchat, Instagram and more.

    Per the WSJ, while Germans are only 1.5% of the Facebook user base, they account for 16% of its worldwide censorship army. Facebook has already contracted for 1,200 moderators in Germany to process user content flags.

    Some readers of the article feel that Germany’s new hate speech fines are really a way to control anti-progressive speech and to squelch opposition to Germany’s mass Muslim migration. Others think it’s simply another way to shake down profitable U.S. companies, similar to what the E.U. has done to Microsoft and Google.

    The primary purpose of this new social media censorship in Germany is to stamp out criticism of Islam and Angela Merkel’s mad hat immigration policy. – Ben Qurayza

    Freedom of speech mean you can say amazingly vile things, unless you say something that directly impinges on someone elses rights. So yelling fire in a crowded room is prohibited speech. Saying Hitler is a good guy is not. The challenge when governments control speech is just that, they control the speech. We see how this works in Iran recently. The bricks that need to be crushed are those that jeopardize free speech. That should be particularly the case in Germany. – Bill Fotsch

    This is designed for Germany to milk American companies for fines. – Olesya Reyenger

    Anything critical of the leftists in power will be squashed, like they are trying to do in the US. – Richard Stanton

    Policing thought crime seems very expensive. – Peter Beacom

  • Buzzfeed Discovers Internet Censorship From Other Countries

    Buzzfeed is at it again today. First it decided to pick sides in a presidential election, and now it has noticed that other countries, often Muslim ones, don’t have the same sensibilities as Americans. And this is a big problem for social media companies like Facebook, Instagram, Snapchat and Twitter.

    Buzzfeed’s Katie Notopoulos noted, “The proliferation of internet-connected mobile phones, in theory, is bringing the world together — with people continents apart talking and sharing, fueled by the sweet nectar of social apps. But in some cases, those connections turn into collisions. Within just the last three weeks, three big American social media platforms — Instagram, Snapchat, and Twitter — have all butted up against the laws and cultural norms of local populations outside the United States.”

    Good points, although somewhat obvious. The three examples she gave (Iranian women posting selfies on Instagram, Indian comedian Tanmay Bhat morphing some photos on Snapchat for humor, and a bunch of Twitter accounts deactivated for making fun of Putin) aren’t really the worst of it.

    What’s worse is that social media companies and Google are giving in to censorship demands. Just last week, Twitter, Facebook, Google and Microsoft agreed to a “hate speech” ban which, for all intents and purposes, bans normal free speech if it bashes principles of Shariah law, according to some critics. It’s well known that Saudi Prince Alwaleed, a believer in Shariah law, which makes women and gay people second-class citizens, owns 5% of Twitter. Should we be concerned?

    Of course there are cultural differences, and those differences stand in direct contrast to American values of free speech, freedom to associate, equal rights and religious freedom. In Saudi Arabia women are not allowed to drive, must cover their faces in public, cannot generally mix with men and can’t vote at any level. They are second-class citizens. In many countries people are not allowed to criticize the government or government leaders. Should U.S. Internet companies go along with this? Are we in effect helping governments keep their populations from rising up for freedom? These are tough questions, but perhaps it’s time the United States enacted laws of its own requiring U.S.-based companies not to compromise the principles of freedom with social media and search engines. Those countries might ban the services in response, but so what?

    Conservative provocateur Breitbart London produced a video to get the word out about what it calls the EU’s “Orwellian” new online censorship deal:

    Some of the comments in the Buzzfeed article seem to get that the story is not just the obvious butting heads of different cultures:

    Jean Alexander Steffen

    They should continue exposing the faults in society’s moral/cultural/legal systems. Censorship is a tool that is used to maintain a system that benefits a sector of society at the expense of others. The refusal to expose and discus issues allows problems in society to pile up, it creates inefficiencies and imbalances in society.

    And maybe the world needs to remove the giant stick from it’s bum and get over it. As others have said, exposing faults and inequalities in a system or government is a GOOD thing and very important. That’s how change happens. Oh, and good luck censoring the internet… some regimes may be able to do this for awhile, but they won’t be able to do it forever. (P.S. – ask Beyonce how hard it is (see: impossible) to get a photo scrubbed from teh interwebz, lol.)
    Kaylee Priest

    at least in America we’re allowed to comment on social issues that need reform. Good luck with some of those backwater misogynistic countries listed up above.

    I feel so sorry for the people who live there, mainly the women and gays… at least before YouTube & social media, they weren’t really aware of how good people have it living outside of the country. How horrid & confusing it must be to see other women driving cars, with concern only for gas prices?

  • Google Provides ‘Right To Be Forgotten’ Update

    Google shared some new numbers related to the “right to be forgotten” ruling, which has led to individuals requesting URL removals from search results. For all the background on that, peruse our coverage here.

    The stats appear in Google’s Transparency Report, where Google now says it has evaluated 1,234,092 URLs for removal. The total number of requests it has seen dating back to May 2014 is 348,085.

    Here’s the latest look at the sites that are most impacted:

    [Screenshot: Transparency Report chart of the domains with the most URLs removed]

    This list, Google says, highlights the domains where it has removed the most URLs from search results. Of the total URLs requested for removal, these sites account for 9%.

    Check out the full Transparency Report here.

    Images via Google

  • Reddit Will Block Content in Certain Countries, Has Already Done So in Germany and Russia

    Reddit has confirmed that it will locally block content if it receives a “valid request from an authorized entity.”

    The policy confirmation comes after the company blocked a specific thread about growing psychoactive mushrooms in Russia. The country’s Service for Supervision of Communications, Information Technology and Mass Media, Roskomnadzor, temporarily banned reddit earlier in the week. Reddit was required to block the specific post in order to get the entire site unblocked.

    Reddit also revealed that it has blocked the entire r/WatchPeopleDie subreddit in Germany.

    According to the company, it censored the specific content in Germany and Russia in order to “preserve the existence of reddit in those regions.”

    It’s clear that in reddit’s mind, having most of the site available is better than taking a stand and winding up having no reddit in some countries.

    Here’s the full statement from reddit:

    This week, Reddit received valid legal requests from Germany and Russia requesting the takedown of content that violated local law. As a result, /r/watchpeopledie was blocked from German IPs, and a post in /r/rudrugs was blocked from Russian IP’s in order to preserve the existence of reddit in those regions. We want to ensure our services are available to users everywhere, but if we receive a valid request from an authorized entity, we reserve the right to restrict content in a particular country. We will work to find ways to make this process more transparent and streamlined as Reddit continues to grow globally.

    Of course, reddit is far from the only tech company that institutes regional bans on content. Google does it. Twitter does it. Facebook does it. But that doesn’t mean reddit users are going to be happy about this. When it comes to any form of censorship – no matter the circumstance – you can expect a pretty loud pushback from at least some of the site’s population.

  • Russia Unblocks Reddit After Dustup over Shroom Post

    On Wednesday afternoon, Russia added reddit to its registry of blocked sites.

    And as of early Thursday morning, reddit has been removed from the blocked sites list.

    After warning that it was considering it, Roskomnadzor, Russia’s Federal Service for Supervision of Communications, Information Technology and Mass Media, sent out the order to block reddit on August 12th. The government was upset over a post about the “cultivation of narcotic plants” – more specifically mushrooms. Officials said they had been trying to get in touch with reddit about removing the offending thread – but didn’t hear back. So they blocked reddit.

    Apparently, reddit complied with the request to block the “offending” post and Russian authorities have unblocked the site. Unless you’re in Russia, you can check out the specific post here. It’s two years old.

    Here’s what Roskomnadzor had to say on Russian social network VKontakte (translated from Russian):

    We thank everyone whose activity on the Web prompted reddit’s administrators to listen to Roskomnadzor. On Aug. 13 the Federal Service’s “hot line” received a letter from the site’s administration reporting that access to the forbidden information had been terminated for the territory of Russia. Roskomnadzor’s inspection validated compliance with the requirement, and the page has been removed from the Register of illegal information. Given that two years ago, after being notified by Roskomnadzor, reddit removed one of the pages carrying this illegal article only for it to appear on another, we expect that the site’s administration will continue to listen to the demands of Russia’s regulatory authorities in the interests of its large Russian audience.

    According to redditors on the r/russia subreddit, reddit is fully accessible in Russia – except for the page in question.

    The user responsible for the mushroom tutorial recently posted in reddit’s popular r/TIFU (Today I Fucked Up) subreddit, saying “TIFU by getting Reddit banned in Russia.”

    “In Russia, there is a law which allow Roskomnadzor, Russian censorship agency, to block any website without court rulling. Two years ago I tested how RKN react to abuse on popular websites/crazy abuses. On of that websites was Reddit,” they say.

    “One thing I learned is that RKN doesn’t want to block popular websites. They respond me that this content is illegal and they blocked it, but they weren’t. It was on 05/21/2013. On 10st Aug 2015 they posted a call to help them contact Reddit administration to official VK page. Funny thing, but they called Psilocybe a plant. Several hours ago they reported that Reddit is blocked in Russia. Seems like things changed.”

  • Apple Is Censoring Dr. Dre (and Everything Else) on Beats 1 Radio

    Today, Apple launched Apple Music to the world. And Apple’s always-on, “progressive” radio station Beats 1 is currently censoring the music it plays worldwide.

    Apple, which gave Dr. Dre a “senior role” in the company when it acquired his Beats company last year, is bleeping out curse words in songs from The Chronic.

    “Beats 1 is a place for progressive radio programming. Alongside new music programs from our anchor DJs, we’ve invited some of the biggest artists in the world to make brilliant radio shows — from exclusive weekly DJ mixes to interviews with iconic musicians about albums that changed their lives. Beats 1 plays everything from old-school hip-hop to futuristic pop, and it’s all handpicked by people who live and breathe music,” says Apple.

    And edited tracks, as if someone gave Walmart a deck and its own radio station.

    Apple confirmed to BuzzFeed that “it is censoring explicit content on Beats 1, and it’s doing it worldwide. The company declined to provide any further comment.”

    Other popular internet radio destinations like Pandora do not censor content, instead providing an “explicit filter” if listeners choose to block explicit content.

    It looks like Apple wants to keep Beats 1 family friendly. This really shouldn’t surprise anyone, considering Apple’s draconian regulation of App Store apps.

  • Reddit Unveils New Anti-Harassment Policy, But What Does It Actually Do?

    Reddit, a site that’s basically run by volunteer moderators with minimal tinkering from the higher-ups, says it’s going to get more serious about dealing with harassment.

    “We have been looking closely at the conversations on reddit and at personal safety. We’ve always encouraged freedom of expression by having a mostly hands-off approach to content shared on our site. Volunteer moderators determine and uphold rules for content in their subreddits, and we have stepped in when we see threats to our values of privacy and safety,” says the reddit team in a recent post.

    “In the past 10 years we’ve seen how these policies have fostered cool and amazing conversations on reddit. We’ve seen new types of conversations as AMAs and /r/askscience and /r/askhistorians developed. We’ve seen more and more organic content as part of conversations after the introduction of self-posts. We’ve also seen the scope and scale of discussions explode.

    “Unfortunately, not all the changes on reddit have been positive. We’ve seen many conversations devolve into attacks against individuals. We share redditors’ frustration with these interactions. We are also seeing more harassment and different types of harassment as people’s use of the Internet and the information available on the Internet evolve over time. For example, some users are harassing people across platforms and posting links on reddit to private information on other sites.”

    In response to this, and a survey which, according to reddit, proves that users think harassment is a big problem, reddit is finally putting into words what constitutes harassment – at least in its eyes.

    This is how harassment is now defined in reddit’s terms:

    Systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them.

    As with Twitter and other social media networks that have been dealing with how to battle online harassment, reddit relies on self-reporting. If you see harassment taking place, either against you or another user, you are tasked with reporting it to reddit.

    What this means, practically, is that “when someone reports harassment we will investigate thoroughly rather than leaving it to moderators and respond based on the nature of the harassment,” according to reddit.

    Reddit’s always been a hard one to figure out. For a site that’s always espoused free speech as a fundamental principle, it’s clear that the admins are conflicted about what that actually means. Why are some subreddits banned while other, seemingly similar or more offensive subreddits are allowed to exist? Why is r/niggers banned while r/greatapes and r/coontown are up and running, for instance? Where’s the line between simply offensive and harassment?

    As you would expect, some redditors are none too thrilled about this announcement, as they feel it lays the groundwork to ban subreddits like r/fatpeoplehate:

    I hope we aren’t trying to become Tumblr. The internet isn’t a safe space. It never has been and hopefully never will be – safe is boring, heavily regulated and Brave New Worldish.

    I don’t like personal attacks either – but this appears to be your grounds to ban subs like /r/fatpeoplehate and /r/fatlogic or /r/CandidFashionPolice .

    You truly didn’t clarify what actions you plan to take to stop harassment. Its either a toothless policy OR a policy absent clear standards/transparency. . .

    “You know what inspired reddit? Speakers Corner’s in London,” says reddit co-founder Alexis Ohanian in response to that. “I studied abroad in London for a semester and it really inspired me (I came back States-side and started a phpbb forum and then a year later Steve and I made reddit). It’s a place where literally anyone can get on a soapbox and talk about what matters to them. I listened to Iraqis (2003) argue for AND against the Iraq war, heard a really hateful speech by the Nation of Islam, was moved by a woman talking about the need for better mental health treatment in the UK, watched a man argue for Gay Rights standing across from a VERY conservative christian telling him he’d burn in hell. reddit should be a place where anyone can pull up their soapbox and speak their mind, or have a discussion and maybe learn something new and even challenging or uncomfortable, but right now redditors are telling us they sometimes encounter users who use the system to harass them and that’s a problem.”

    Another user thinks everything should be left to the mods:

    Don’t ‘keep everyone safe’. This isn’t Facebook, reddit is a free speech platform and I don’t think that the omniscient mods like /u/kn0thing should be able to dictate to subreddits how they should handle their community. Censorship should be the subreddit’s decision. If we feel that some sub’s should be silenced then we are no better than they are.

    “This is not what we’re proposing. We made reddit so that as many people as possible could speak as freely as possible — when our userbase is telling us that harassment is a huge problem for them and it’s effectively silencing or keeping people off the site, it’s a problem we need to address,” responds Ohanian.

    Reddit’s higher-ups have stepped in and banned subreddits before. But Reddit doesn’t really have a lot of rules. No harassing other users now joins no spam, no vote manipulation, no posting of personal info, no child porn, no revenge porn, and no messing with the site itself as the only rules. This is a place where r/sexyabortions, r/picsofdeadkids, r/cutefemalecorpses, and r/gasthekikes are allowed to thrive.

    Sure, there will be plenty of debate on what constitutes harassment. For instance, the aforementioned r/fatpeoplehate. If users aren’t specifically targeting other users or dealing in personal information, is it ok?

    We’ll see how this one plays out. If reddit starts banning a bunch of controversial subreddits because of “increased reports of harassment” or something, then this might be a bigger story. But even reddit says this shouldn’t really affect anything:

    “This change will have no immediately noticeable impact on more than 99.99% of our users. It is specifically designed to prevent attacks against people, not ideas. It is our challenge to balance free expression of ideas with privacy and safety as we seek to maintain and improve the quality and range of discourse on reddit.”

    Some will see this as a PR move from a site that’s continually admonished for being a lawless wasteland, yet has no real intentions of doing anything to rein itself in. Some will see this as a much-needed clarification on how reddit plans to curb rampant harassment. Some will see it as an assault on free speech. Some won’t see it at all.

    If you want to debate it on reddit, you can.

    Image via Blake Patterson, Flickr Creative Commons

  • Facebook Smartly Adds Warnings to Graphic Videos

    When it comes to dealing with violent and/or potentially offensive content, Facebook has made a lot of missteps. Now, the biggest social network in the world is looking to find a satisfactory medium between a completely hands-off approach and stifling gatekeeping that would elicit (and in the past has elicited) cries of censorship.

    Facebook is beginning to show warnings on top of content flagged as graphic, forcing users to agree to continue before watching or viewing said content. The company is also looking to restrict all such content among its younger user base (13-17).

    The past couple of years have seen Facebook flip-flop on how it wants to deal with graphic content on the site. In 2013, Facebook bowed to public outrage, online petitions, and harsh criticism from family groups and made the decision to ban a graphic beheading video that had been circulating around the site.

    Fast forward a few months, and Facebook was singing a different tune. The company reversed the ban on the video, and in doing so instituted a new policy to govern similar content.

    Soon after, Facebook made an official change to its community standards. Here’s Facebook’s current stance on graphic content:

    Facebook has long been a place where people turn to share their experiences and raise awareness about issues important to them. Sometimes, those experiences and issues involve graphic content that is of public interest or concern, such as human rights abuses or acts of terrorism. In many instances, when people share this type of content, it is to condemn it. However, graphic images shared for sadistic effect or to celebrate or glorify violence have no place on our site.

    When people share any content, we expect that they will share in a responsible manner. That includes choosing carefully the audience for the content. For graphic videos, people should warn their audience about the nature of the content in the video so that their audience can make an informed choice about whether to watch it.

    But here’s the thing – expecting people to share content in a responsible manner and hoping that they’ll warn people that they’re about to see someone’s head being chopped off is naive at best. Facebook isn’t naive about these sorts of things – not really. That’s why the company laid the groundwork for this latest move way back in 2013.

    “First, when we review content that is reported to us, we will take a more holistic look at the context surrounding a violent image or video, and will remove content that celebrates violence. Second, we will consider whether the person posting the content is sharing it responsibly, such as accompanying the video or image with a warning and sharing it with an age-appropriate audience,” said Facebook at the time. And the company did experiment with warnings for graphic content – but they never went wide.

    Now, it appears they are. A new warning is reportedly appearing for some on top of a video of the death of policeman Ahmed Merabet, who was killed in Paris by a terrorist involved in the Charlie Hebdo attacks.

    “Why am I seeing a warning before I can view a photo or video?” asks a recently-posted question on Facebook’s help page.

    “People come to Facebook to share their experiences and raise awareness about issues that are important to them. To help people share responsibly, we may limit the visibility of photos and videos that contain graphic content. A photo or video containing graphic content may appear with a warning to let people know about the content before they view it, and may only be visible to people older than 18,” says Facebook.

    A Facebook spokesperson told the BBC that “the firm’s engineers were still looking to further improve the scheme” which could “include adding warnings to relevant YouTube videos.”

    Apparently, Facebook was pressured both externally and internally – from its safety advisory board – to do something more to protect users (especially kids) from graphic content.

    Of course, there’s a whole other group of people that Facebook is worried about protecting.

    Video is a-boomin’ on Facebook. Facebook serves, on average, over a billion video views per day – almost one per user – and in the past year, the number of video posts per person has increased 75% globally and 94% in the US. And this is important to advertisers. What’s also important to advertisers? That their smoothie ads aren’t running up against beheading videos.

    Adding warnings to graphic content is a smart move. Not only does it allow Facebook to keep the content on the site and thus dodge the “free speech!” cries, but it lets advertisers feel safer about advertising on the site. It also puts the onus on users – hey, we told you it was bad but you clicked anyway … your choice!

    Remember, Facebook isn’t a haven for free speech. It never will be. Facebook doesn’t owe you free expression. The company can do whatever it wants and censor as much content as it pleases. Considering that, a little warning before graphic content is better than no content at all, right?

    Image via Mark Zuckerberg, Facebook

  • Is The Right To Be Forgotten Dangerous?

    Google has released its latest Transparency Report, which, as of earlier this year, includes URL removal requests stemming from the highly-publicized Right to be Forgotten ruling in Europe. The inventor of the World Wide Web recently spoke out against the ruling, calling it dangerous. Meanwhile, the requests continue to roll in, and other parts of the world may start being affected.

    Do you agree that the Right to be Forgotten is a dangerous thing, or do you think it’s the right way for the Internet to work? Share your thoughts in the comments.

    Back in October, when Google first revealed its Right to be Forgotten removal request data in the Transparency Report, it said it had evaluated 497,695 URLs for removal and received a total of 144,954 requests.

    The latest data has the numbers at 684,419 URLs evaluated and a total of 189,238 requests.

    On the Transparency Report site, Google also gives examples of requests it encounters. One involves a woman who requested Google remove a decades-old article about her husband’s murder, which included her name. The page has been removed from search results for her name.

    In another example, a financial professional in Switzerland asked Google to remove over 10 links to pages reporting on his arrest and conviction for financial crimes. Google did not remove pages from search results in those cases.

    A rape victim in Germany asked Google to remove a link to a newspaper article about the crime, which Google did in search results for the person’s name.

    According to the company, the sites that are most impacted by the URL removals are Facebook, ProfileEngine, YouTube, Badoo, Google Groups, Yasni.de, Wherevent.com, 192.com, yasni.fr, and yatedo.fr.

    One of the latest to speak out against the situation was none other than Tim Berners-Lee, the guy responsible for the World Wide Web. Via CNET:

    “This right to be forgotten — at the moment, it seems to be dangerous,” Berners-Lee said Wednesday, speaking here at the LeWeb conference. “The right to access history is important.”

    In a wide-ranging discussion at the conference, Berners-Lee said it’s appropriate that false information should be deleted. Information that’s true, though, is important for reasons of free speech and history, he said. A better approach to the challenge would be rules that protect people from inappropriate use of older information. An employer could be prohibited from taking into account a person’s juvenile crimes or minor crimes more than 10 years old, for example.

    The EU recently put forth some guidelines for the Right to be Forgotten, for search engines to work with, though they don’t go very far in terms of quelling the biggest concerns many have with the ruling, such as Berners-Lee’s.

    The Right to be Forgotten appears to be creeping out of Europe and into other parts of the world. Consider this from the Japan Times earlier this month:

    Yes. In a possible first in Japan, the Tokyo District Court in October issued an injunction ordering Google to remove the titles and snippets to websites revealing the name of a man who claimed his privacy rights were violated due to articles hinting at past criminal activity.

    Tomohiro Kanda, who represented the man, said the judges clearly had the European court’s ruling in mind when they ordered Google to take down the site titles and snippets. Google has since deleted search results deemed by the court as infringing on the man’s privacy, Kanda said.

    But generally speaking, Japanese judges have yet to reach a consensus on how to balance the right to privacy and the freedom of expression and of information.

    Regulators in Europe have also been calling to have URLs removed from Google’s search engines worldwide rather than just from the European versions of Google.

    Are you concerned with the Right to be Forgotten? Let us know in the comments.

  • Google Slams MPAA For Trying To Resurrect SOPA, Censor The Internet

    Google says it’s “deeply concerned” about reports that the MPAA has been secretly leading a campaign to revive SOPA (the Stop Online Piracy Act) and help “manufacture legal arguments” in connection with an investigation by Mississippi State Attorney General Jim Hood.

    Google is referring to the campaign as “ZombieSOPA,” and has started a campaign of its own.

    SOPA was defeated nearly three years ago, in no small part thanks to widespread protest from Internet users and websites (115,000 of them). Opponents maintained that the legislation would have led to widespread censorship. According to Google, Congress received over 8 million phone calls and 4 million emails in protest of SOPA in a single day. That’s in addition to 10 million petition signatures.

    In a blog post, Google walks through some of the recent reports, and writes:

    Even though Google takes industry-leading measures in dealing with problematic content on our services, Attorney General Hood proceeded to send Google a sweeping 79-page subpoena, covering a variety of topics over which he lacks jurisdiction. The Verge reported that the MPAA and its members discussed such subpoenas and certainly knew about this subpoena’s existence before it was even sent to Google.

    Attorney General Hood told the Huffington Post earlier this week that the MPAA “has no major influence on my decision-making,” and that he “has never asked [the] MPAA a legal question” and “isn’t sure which lawyers they employ.” And yet today the Huffington Post and the Verge revealed that Attorney General Hood had numerous conversations with both MPAA staff and Jenner & Block attorneys about this matter.

    The company says it has “serious legal concerns about all of this.” It also points to a quote from the MPAA’s website about how the organization aims to preserve free speech, even as it tries to censor the Internet.

    Image via Twitter