The International Business Times (IBT) is reporting that the Australian Competition and Consumer Commission (ACCC) has filed suit against Google claiming the tech giant misleads consumers about how it collects and uses their data.
The ACCC claims Google used “highly sensitive and valuable personal information” without properly informing consumers and giving them the opportunity to make a choice. According to the ACCC, Google used misleading on-screen prompts and labels regarding what information was being collected. The tech giant claimed that customers’ data would only be used for personal purposes and to make sure Google’s services worked properly when, in fact, the collected data was used elsewhere.
According to the complaint, between 2017 and 2018, users who did not turn off the “location history” and “web & app activity” settings had their data collected and used.
ACCC chairman Rod Sims said: “We’re also alleging that some of the behaviour is continuing. We want declarations that the current behaviour should not continue.”
Mr. Sims said the ACCC was seeking “significant penalties,” as well as an admission from Google that its behavior was “inappropriate.” The case will likely be watched closely by similar agencies around the world, as Google and Facebook are already under scrutiny for their handling of consumer data.
Twitter is the latest company to admit to a privacy faux pas. In a statement, Twitter informed users that phone numbers and email addresses intended for security may have inadvertently been used for advertising purposes.
Twitter uses Tailored Audiences, a customized version of an industry-standard platform designed to allow “advertisers to target ads to customers based on the advertiser’s own marketing lists (e.g., email addresses or phone numbers they have compiled). Partner Audiences allows advertisers to use the same Tailored Audiences features to target ads to audiences provided by third-party partners.”
Twitter goes on to say they may have matched users to advertisers’ marketing lists based on contact info that was provided for security purposes, such as two-factor authentication. The company is unsure how many users were impacted, although they are confident no data was shared with third parties, including their advertising partners.
“As of September 17, we have addressed the issue that allowed this to occur and are no longer using phone numbers or email addresses collected for safety or security purposes for advertising.
“We’re very sorry this happened and are taking steps to make sure we don’t make a mistake like this again. If you have any questions, you may contact Twitter’s Office of Data Protection through this form.”
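For those wondering how a mismatch like this can happen, here is a rough sketch in Swift of an audience-matching routine that fails the same way, along with the obvious fix. Everything below is hypothetical; Twitter has not published its actual implementation.

```swift
import Foundation

// Hypothetical sketch of a Tailored Audiences-style match gone wrong.
// Names and types are illustrative, not Twitter's actual code.

enum ContactSource {
    case profile       // supplied by the user to share
    case securityOnly  // supplied for 2FA / account recovery
}

struct Contact {
    let hashedValue: String // e.g. a hash of a normalized email or phone number
    let source: ContactSource
}

/// Buggy behavior: matches an advertiser's hashed marketing list against
/// *all* contact info on file, including 2FA phone numbers and emails.
func buggyAudienceMatch(userContacts: [Contact], advertiserList: Set<String>) -> Bool {
    userContacts.contains { advertiserList.contains($0.hashedValue) }
}

/// Fixed behavior: contact info collected for safety or security purposes
/// is filtered out before any matching happens.
func fixedAudienceMatch(userContacts: [Contact], advertiserList: Set<String>) -> Bool {
    userContacts
        .filter { $0.source != .securityOnly }
        .contains { advertiserList.contains($0.hashedValue) }
}
```

The bug is simply a missing filter: contact info collected for security lives alongside contact info users chose to share, and nothing tags it as off-limits to the ads pipeline.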
Few technologies are as controversial and divisive as facial recognition. Customers have come to rely on it to log into their phones and tablets, police and government agencies are increasingly using it to identify suspects, and privacy advocates decry it as an unconstitutional invasion of people’s rights.
Amazon has established itself as a leader in the field of facial recognition with its Rekognition software. While the software is widely used by police, as well as government agencies such as Immigration and Customs Enforcement (ICE), it has not escaped controversy. The ACLU has twice used Rekognition on photos of politicians, each time with dozens of false matches. In both instances, however, Amazon responded by pointing out that the ACLU left the confidence setting at the default 80 percent threshold, instead of the 99 percent threshold Amazon recommends for law enforcement.
Nonetheless, Amazon can see the writing on the wall and knows it’s only a matter of time before facial recognition is regulated. Needless to say, it’s in Amazon’s best interest for those regulations to favor companies that profit from the technology. To that end, Vox is reporting that Amazon is drafting laws to regulate facial recognition, which it plans to pitch to lawmakers.
According to Vox, CEO Jeff Bezos told reporters that the company’s “public policy team is actually working on facial recognition regulations; it makes a lot of sense to regulate that.
“It’s a perfect example of something that has really positive uses, so you don’t want to put the brakes on it. But, at the same time, there’s also potential for abuses of that kind of technology, so you do want regulations. It’s a classic dual-use kind of technology.”
While skeptics are understandably concerned that Amazon’s foray into legislation may do little to nothing to protect the rights of everyday citizens, only time will tell if Amazon’s efforts are sincere or just another step toward a more Orwellian outcome.
Google received a welcome victory in the European Union’s highest court in relation to the EU’s “right to be forgotten” rules.
For the past five years, EU citizens have enjoyed the right to have search engines remove embarrassing or outdated information from their indexes. At the heart of the case was whether the right to be forgotten extended beyond EU borders.
The CNIL, a French privacy regulatory body, had ordered Google to expand the right to be forgotten globally. Google resisted, citing concerns that authoritarian regimes in other parts of the world would abuse the feature to cover up crimes and human rights violations. The CNIL eventually tried to levy a $109,901 fine.
Ultimately, however, the court ruled that the right to protect personal data was not an absolute right, and that there was no obligation for Google—or any other search engine—to delist search results outside of the EU.
While Google praised the decision, it is not without complications. Google has relied on geoblocking to ensure EU citizens do not see delisted search results. Geoblocking does not, however, mean that the search results are not there—it simply means they are not accessible to someone within the EU. A person based in the U.S., or anywhere else outside of Europe, would still be able to find the very things someone in Europe may have filed to have delisted and will never see themselves.
Despite the complications, the ruling was welcomed within the tech community. The case was being watched closely to see how much authority the EU had to impose regulations on a U.S. company. Had Google lost, the long-term implications for Google and every other tech firm would have been profound—and gone far beyond mere search results.
On September 10, Apple held their “By Invitation Only” event, unveiling new iPhones, Apple Arcade, Apple TV+ and iOS 13. While many of the event’s announcements were aimed squarely at the consumer market, there were a number of things surrounding the iOS update that impact businesses, especially when it comes to marketing and development.
For years, despite Apple working to protect customers’ privacy, companies have found ways to continue tracking iPhone users, often without their permission. Recent news articles have highlighted how companies are using Bluetooth to track individuals. Similarly, some apps try to use GPS to track people even when the app is not in use. Facebook is one such company that was recently busted for using precise location data to track users without their knowledge or consent.
As a result, in addition to iOS 13’s general facelift and improvements, there are a number of privacy-related features that will likely have a significant impact on marketing and development teams who have previously relied on these tracking methods.
Bluetooth Tracking
One of the biggest changes to iOS 13 is how Bluetooth connections are handled. Prior to this update, apps could access the iPhone’s Bluetooth functionality to track a user’s whereabouts thanks to tracking beacons. Customers in a store could be tracked as they walked around to different sections, giving the store information about what displays and product categories were driving the most foot traffic. Similarly, shopping malls could use beacons to track individuals and determine movement patterns, store popularity and more.
That’s not to say that all apps requesting Bluetooth access are using it for tracking. Smartwatches, health monitors and the like will need to connect to their corresponding apps via Bluetooth. But it’s clear that many apps don’t need access. For example, Domino’s and Macy’s are two apps that request access for the purpose of tracking users’ whereabouts.
For marketing firms and departments who have relied on this technology, iOS 13 represents the end of an era. Instead of using Bluetooth to track individuals without their consent, marketing departments will need to find other ways to engage with customers. In some cases, this may involve adding an incentive for the customer to allow tracking. In other cases, it may require adding a check-in option instead of automatic tracking.
iOS 13’s change has significant implications for development teams as well. In many cases, Bluetooth tracking functionality was included in various software development kits (SDKs) as a bundled benefit of using that particular SDK. With more and more customers choosing not to be tracked, developers will need to find other ways to make their SDKs stand out and provide value to their customers.
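On the developer side, the practical change is that iOS 13 prompts the user before an app can use Bluetooth at all, and apps must declare an NSBluetoothAlwaysUsageDescription string in Info.plist. Here is a minimal sketch of what handling that looks like with Core Bluetooth; the class name is illustrative:

```swift
import CoreBluetooth

// Minimal sketch: on iOS 13, creating a CBCentralManager triggers the new
// Bluetooth permission prompt; the delegate reports the outcome.
final class BeaconScanner: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager?

    func start() {
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        switch central.state {
        case .poweredOn:
            // Only reached once the user has granted access (and Bluetooth is on).
            central.scanForPeripherals(withServices: nil, options: nil)
        case .unauthorized:
            // The user declined the iOS 13 prompt; no beacon tracking.
            print("Bluetooth access denied by the user")
        default:
            break // .poweredOff, .resetting, .unknown, .unsupported
        }
    }
}
```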
GPS Tracking
GPS tracking is another area where some companies have abused consumer trust. Facebook and others have been accused of using precise location data to track users, even when the app is not active.
iOS 13 adds a new option to GPS permissions. In addition to “Don’t Allow” and “Allow While Using App,” iOS 13 includes an “Allow Once” option. When a user chooses this, the app is granted one-time access to GPS functionality, and the user will be prompted to grant access again the next time they open that app.
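Notably, apps don’t need new code to support this. “Allow Once” surfaces to the app as an ordinary when-in-use grant that quietly resets before the next launch, so the app simply sees the permission prompt come back. A minimal sketch, assuming a standard Core Location setup:

```swift
import CoreLocation

// Minimal sketch: "Allow Once" appears to the app as a normal
// .authorizedWhenInUse grant that resets to .notDetermined before the
// next launch, so the user is prompted again.
final class LocationRequester: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    func requestOneFix() {
        switch CLLocationManager.authorizationStatus() {
        case .notDetermined:
            // Triggers the iOS 13 prompt: Allow Once / Allow While Using App / Don't Allow.
            manager.requestWhenInUseAuthorization()
        case .authorizedWhenInUse, .authorizedAlways:
            manager.requestLocation() // a single fix, delivered to the delegate
        default:
            print("Location access denied or restricted")
        }
    }

    func locationManager(_ manager: CLLocationManager, didChangeAuthorization status: CLAuthorizationStatus) {
        // Fires after the user answers the prompt, including an "Allow Once" grant.
        if status == .authorizedWhenInUse { manager.requestLocation() }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        if let fix = locations.last {
            print("Got fix:", fix.coordinate.latitude, fix.coordinate.longitude)
        }
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        print("Location error:", error)
    }
}
```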
Again, for companies whose apps rely on GPS functionality, it will be increasingly important to ensure the app uses GPS for a specific reason that offers customers an improved experience. Otherwise, if an app’s request for GPS access is suspect or lacks a clear reason, customers may switch to an app that respects their privacy.
Wrapping Up
As with many iOS updates, iOS 13 brings a number of significant changes, not the least of which is improved customer privacy. This will undoubtedly present a challenge to some businesses, not only those who may have been abusing these features in the past, but also businesses whose apps will simply be collateral damage in the battle to protect user privacy.
On the other hand, companies who are quick to adapt, work to protect user privacy and look for new ways to engage their customers will likely find new opportunities open to them.
“5G brings a couple of things,” says Avast CEO Ondrej Vlcek. “One is the density of the network which is enabling things like IoT, the Internet of Things. That’s an exciting thing but also poses some new security risks. Second is speed of connectivity which we all want and which we all sort of are hoping to get better. But in terms of timing, it kind of differs geo by geo. East Asia is always ahead in that regard. In Europe, we can realistically expect something within two or three years.”
Ondrej Vlcek, CEO of Avast, discusses new security risks with 5G and how privacy is becoming a big part of their business in a conversation on Bloomberg:
5G Poses Some New Security Risks
There were really two drivers (to our earnings results this quarter). The first one was our consumer direct segment, desktop direct, which grew 12.5 percent. The second was consumer indirect, which is actually powered by both the Jumpshot business that we have as well as the Secure Browser. These were kind of the two main things.
5G brings a couple of things. One is the density of the network which is enabling things like IoT, the Internet of Things. That’s an exciting thing but also poses some new security risks. Second is speed of connectivity which we all want and which we all sort of are hoping to get better. But in terms of timing, it kind of differs geo by geo. East Asia is always ahead in that regard. In Europe, we can realistically expect something within two or three years.
Privacy Is The Other Side Of The Security Coin
I think privacy is a new category. We see it as the other side of the security coin. We are heavily investing in creating privacy-oriented solutions. So actually our portfolio today is not just security; antivirus protection is now actually less than half of our business. The other half is made up of tools like privacy controls because we see a big opportunity. At the same time, the need is real. Consumers are more and more realizing there are privacy risks in what they are doing online and there is something that needs to be done about that.
I got sort of inspired by the captains from the Silicon Valley such as Google and Facebook. So I gave up my salary and my bonus and I’m only getting compensated by stock which I think is the right thing for the CEO to do. Clearly, my objective is to keep the company growing. We’ve got a great runway and I’m very optimistic, being new in the role and seeing the opportunities. This is a good position to be in.
“The long-term goal (with Alexa) was to try to invent the Star Trek computer,” says David Limp, Amazon’s SVP of devices and services. “I grew up watching Roddenberry and loved it. We all loved watching it and the science had moved up enough where we thought we had a shot at it. It’s still going to take us years, if not decades more, to get to the shining star that is that Star Trek computer. But we think we can do it.”
David Limp, SVP of Devices & Services at Amazon, discusses the future of devices and Amazon’s role in building trust and protecting privacy in an interview on CNBC at the Amazonre:MARS conference in Las Vegas:
Long-Term Goal With Alexa Is To Invent the Star Trek Computer
The long-term goal (with Alexa) was to try to invent the Star Trek computer. I grew up watching Roddenberry and loved it. It was a lot more innocent than you might make it out to be. Which is, can we invent that computer? We all loved watching it and the science had moved up enough where we thought we had a shot at it. It’s still going to take us years, if not decades more, to get to the shining star that is that Star Trek computer. But we think we can do it.
If you have that in your house or in your car or in your conference room, you’re going to find all sorts of things to do with it. Some, Amazon will invent and it’ll help Amazon. But much more, it’ll help developers. There are 90,000 plus skills and hundreds of thousands of developers building around Alexa right now. If you’d five years ago said there’s going to be a new developer ecosystem that’s not about an operating system and that’s not about applications, but about skills in the cloud, you would have laughed at me. But here it is sitting in front of us, all around us, right here.
Our Focus Is To Invent On Behalf Of Customers
Our focus is to invent on behalf of customers. If we keep our focus there and build cool things that customers love to use and continue to earn their trust, which we have to do every day, then we think the outputs will speak for themselves. We focus on that. Customer trust is kind of the oil of the Amazon flywheel. We think about it every day. It’s thinking about privacy as you think about the kinds of products that we’re doing. Whether it’s a Ring doorbell or it’s an Echo sitting in your kitchen, it has to be foundational to the product. It’s not something you glom on later as an afterthought.
We think about it at the very upfront when we’re beginning to invent the product. We’re gonna put these in our homes. What do we want to think about privacy? What do we think about trust? We build features into the products and into the services where (those concepts) are first and foremost and paramount. We’re continuing to evolve that as well. It’s not like you’re going to get everything right day one. As we learn from customers we’ll add new features and services that build on that and add more privacy and trust as we go on.
The First Thing Is To Get Customers To Love A Product
The first thing is to get customers to love a product. If you build a product that customers love and use then good things usually come in consumer electronics when you do that. For us, that’s the first thing that you want to do. It happened early on with Kindle. People loved it and then we figured out how to build a book business around it. Similarly, the great thing about Echo and Alexa is that customers love the product.
I don’t think that they’re necessarily buying more yet because of that, but they are doing certain things in digital that lead to buying some more things. Specifically, we’ve kind of brought music back into the home again. It had atrophied in the home. Now music subscription services (Spotify, Amazon Music, and Apple Music starting last year) are growing on Echo and Alexa. People are listening to audiobooks. We have a business there in Audible with the subscription services. Those are the early signs where you start seeing that. In addition, people are buying more smart home products. Whether it’s a smart plug or a light bulb or a robotic vacuum, people are buying those more because it’s easier to control with a voice interface.
Anything That Advances Privacy For Customers, I’m a Fan Of
Anything that advances privacy for customers and gives them a more trusted environment, I’m a big fan of as a consumer. I don’t know enough about that product (announced on Monday by Apple) to weigh in on the specifics of it. As you think about Amazon and our credentials and being able to log on to Amazon, we’ve been doing that for 20 plus years. Your credit card number and your address which we ship your products to, that’s sacrosanct. We have to build trust every day. Any other company or any other person that’s furthering that I think it’s just great for the industry.
According to a Reddit post, Twitter still scrapes a lot of personalized data from you even with all personalized ads settings disabled. Reddit user u/dangeredwolf says that he noticed this personally and soon realized that Twitter’s own privacy policy actually allows Twitter to use certain personal information. This includes what you say in Tweets, who you follow, your type of phone, where you are located, and what you click. It does make you wonder what personal information they are using when you opt in!
Apple CEO Tim Cook spoke at the 2018 International Conference of Data Protection and Privacy Commissioners in Brussels last night and gave a bold and possibly controversial privacy speech. Cook directly challenged Facebook, and tech companies in general, to change their perspective on privacy. He also said that Apple is fully supportive of ‘a comprehensive federal privacy law in the United States.’
Below is the full text of Apple CEO Tim Cook’s speech:
Apple CEO Tim Cook – The Privacy Speech
It is an honor to be here with you today in this grand hall, a room that represents what is possible when people of different backgrounds, histories, and philosophies come together to build something bigger than themselves. I am deeply grateful to our hosts. I want to recognize Ventsislav Karadjov for his service and leadership. And it’s a true privilege to be introduced by his co-host, a statesman I admire greatly, Giovanni Buttarelli.
Now Italy has produced more than its fair share of great leaders and public servants. Machiavelli taught us how leaders can get away with evil deeds, and Dante showed us what happens when they get caught.
You Set an Example for the World
Giovanni has done something very different. Through his values, his dedication, his thoughtful work, Giovanni, his predecessor Peter Hustinx, and all of you have set an example for the world. We are deeply grateful.
We need you to keep making progress, now more than ever. Because these are transformative times. Around the world, from Copenhagen to Chennai to Cupertino, new technologies are driving breakthroughs in humanity’s greatest common projects. From preventing and fighting disease, to curbing the effects of climate change, to ensuring every person has access to information and economic opportunity.
We See Vividly, Painfully, How Technology Can Harm Rather Than Help
At the same time, we see vividly, painfully, how technology can harm rather than help. Platforms and algorithms that promised to improve our lives can actually magnify our worst human tendencies. Rogue actors and even governments have taken advantage of user trust to deepen divisions, incite violence, and even undermine our shared sense of what is true and what is false.
This crisis is real. It is not imagined, or exaggerated, or crazy. And those of us who believe in technology’s potential for good must not shrink from this moment. Now, more than ever, as leaders of governments, as decision-makers in business, and as citizens, we must ask ourselves a fundamental question: What kind of world do we want to live in? I’m here today because we hope to work with you as partners in answering this question.
Technology Doesn’t Want To Do Great Things – That Part Takes Us
At Apple, we are optimistic about technology’s awesome potential for good. But we know that it won’t happen on its own. Every day, we work to infuse the devices we make with the humanity that makes us. As I’ve said before, technology is capable of doing great things, but it doesn’t want to do great things. It doesn’t want anything. That part takes all of us.
That’s why I believe that our missions are so closely aligned. As Giovanni puts it, “We must act to ensure that technology is designed and developed to serve humankind and not the other way around.”
Privacy is a Fundamental Human Right
We at Apple believe that privacy is a fundamental human right. But we also recognize that not everyone sees it that way. In a way, the desire to put profits over privacy is nothing new.
As far back as 1890, future Supreme Court Justice Louis Brandeis published an article in the Harvard Law Review making the case for a “Right to Privacy” in the United States. He warned, “Gossip is no longer the resource of the idle and of the vicious, but has become a trade.”
Our Own Information is Being Weaponized Against Us
Today that trade has exploded into a data industrial complex. Our own information, from the every day to the deeply personal, is being weaponized against us with military efficiency. Every day, billions of dollars change hands, and countless decisions are made, on the basis of our likes and dislikes, our friends and families, our relationships and conversations, our wishes and fears, our hopes and dreams.
These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded, and sold. Taken to its extreme, this process creates an enduring digital profile and lets companies know YOU better than YOU may know yourself.
We Shouldn’t Sugarcoat the Consequences… This is Surveillance
Your profile is then run through algorithms that can serve up increasingly extreme content, pounding our harmless preferences into hardened convictions. If green is your favorite color, you may find yourself reading a lot of articles or watching a lot of videos about the insidious threat from people who like orange.
In the news, almost every day, we bear witness to the harmful, even deadly, effects of these narrowed worldviews. We shouldn’t sugarcoat the consequences. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable. It should unsettle us. And it illustrates the importance of our shared work and the challenges still ahead of us.
We Support a Comprehensive Federal Privacy Law in the US
Fortunately, this year, you’ve shown the world that good policy and political will can come together to protect the rights of everyone. We should celebrate the transformative work of the European institutions tasked with the successful implementation of the GDPR. We also celebrate the new steps taken, not only here in Europe, but around the world. In Singapore, Japan, Brazil, New Zealand, and many more nations, regulators are asking tough questions and crafting effective reforms.
It is time for the rest of the world, including my home country, to follow your lead. We at Apple are in full support of a comprehensive federal privacy law in the United States. There and everywhere, it should be rooted in four essential rights.
First, the right to have personal data minimized. Companies should challenge themselves to de-identify customer data, or not to collect it in the first place. Second, the right to knowledge. Users should always know what data is being collected and what it is being collected for. This is the only way to empower users to decide what collection is legitimate and what isn’t. Anything less is a sham.
Third, the right to access. Companies should recognize that data belongs to users, and we should all make it easy for users to get a copy of, correct, and delete their personal data. And fourth, the right to security. Security is foundational to trust and all other privacy rights.
There Are Those Who Would Prefer I Hadn’t Said All of That
Now, there are those who would prefer I hadn’t said all of that. Some oppose any form of privacy legislation. Others will endorse reform in public and then resist and undermine it behind closed doors. They may say to you, ‘our companies will never achieve technology’s true potential if they are constrained with privacy regulation.’ But this notion isn’t just wrong, it is destructive.
Technology’s potential is, and always must be, rooted in the faith people have in it, in the optimism and creativity that it stirs in the hearts of individuals, and in its promise and capacity to make the world a better place. It’s time to face facts. We will never achieve technology’s true potential without the full faith and confidence of the people who use it.
At Apple, Respect for Privacy and Suspicion of Authority Are in Our Blood
At Apple, respect for privacy and a healthy suspicion of authority have always been in our bloodstream. Our first computers were built by misfits, tinkerers, and rebels, not in a laboratory or a boardroom, but in a suburban garage. We introduced the Macintosh with a famous TV ad channeling George Orwell’s 1984, a warning of what can happen when technology becomes a tool of power and loses touch with humanity.
And way back in 2010, Steve Jobs said in no uncertain terms, “Privacy means people know what they’re signing up for, in plain language, and repeatedly.” It’s worth remembering the foresight and courage it took to make that statement.
When we designed this device we knew it could put more personal data in your pocket than most of us keep in our homes. And there was enormous pressure on Steve and Apple to bend our values and to freely share the information. But we refused to compromise.
In fact, we’ve only deepened our commitment in the decade since. From hardware breakthroughs that encrypt fingerprints and faces securely and only on your device to simple and powerful notifications that make clear to every user precisely what they’re sharing and when they are sharing it. We aren’t absolutists, and we don’t claim to have all the answers. Instead, we always try to return to that simple question: What kind of world do we want to live in?
At every stage of the creative process, then and now, we engage in an open, honest, and robust ethical debate about the products we make and the impact they will have. That’s just a part of our culture. We don’t do it because we have to, we do it because we ought to. The values behind our products are as important to us as any feature.
The Dangers Are Real From Cyber-Criminals to Rogue Nation States
We understand that the dangers are real from cyber-criminals to rogue nation states. We’re not willing to leave our users to fend for themselves. And, we’ve shown, we’ll defend them, we will defend our principles when challenged.
Those values, that commitment to thoughtful debate and transparency, they’re only going to get more important. As progress speeds up, these things should continue to ground us and connect us, first and foremost, to the people we serve.
For AI to be Truly Smart, It Must Respect Human Values
Artificial Intelligence is one area I think a lot about. Clearly, it’s on the minds of many of my peers as well. At its core, this technology promises to learn from people individually to benefit us all. Yet advancing AI by collecting huge personal profiles is laziness, not efficiency. For Artificial Intelligence to be truly smart, it must respect human values, including privacy.
If we get this wrong, the dangers are profound. We can achieve both great Artificial Intelligence and great privacy standards. It’s not only a possibility, it is a responsibility. In the pursuit of artificial intelligence, we should not sacrifice the humanity, creativity, and ingenuity that define our human intelligence. And at Apple, we never will.
In the mid-19th Century, the great American writer Henry David Thoreau found himself so fed up with the pace and change of Industrial society that he moved to a cabin in the woods by Walden Pond. Call it the first digital cleanse.
Yet even there, where he hoped to find a bit of peace, he could hear a distant clatter and whistle of a steam engine passing by. “We do not ride on the railroad,” he said. “It rides upon us.”
Those of us who are fortunate enough to work in technology have an enormous responsibility. It is not to please every grumpy Thoreau out there. That’s an unreasonable standard, and we’ll never meet it. We are responsible, however, for recognizing that the devices we make and the platforms we build have real lasting, even permanent effects, on the individuals and communities who use them.
What Kind of World Do We Want to Live In?
We must never stop asking ourselves, what kind of world do we want to live in? The answer to that question must not be an afterthought, it should be our primary concern. We at Apple can, and do, provide the very best to our users while treating their most personal data like the precious cargo that it is. And if we can do it, then everyone can do it.
Fortunately, we have your example before us. Thank you for your work, for your commitment to the possibility of human-centered technology, and for your firm belief that our best days are still ahead of us.
Should tech companies be hiring Chief Ethics Officers in order to get better at self-examination? That’s the question posed by Kara Swisher, internet pioneer and Recode editor at large, in a New York Times column today. “I think we can all agree that Silicon Valley needs more adult supervision right about now,” Swisher wrote. “Is the solution for its companies to hire a chief ethics officer?”
Kara Swisher discussed the idea of tech companies hiring Ethics Officers in an interview on CNBC:
Tech Companies Have Faced Many Ethics Challenges
All these companies have faced one thing after another. Whether it’s Google in China or Google with the hack that they didn’t disclose for six months, or it’s Facebook with so many things. I mean the elections, the issues around fake news, the issues around bots, the issues around the Russians, and everything. Then there’s Twitter with Alex Jones and things like that.
Swisher Proposes Idea of Hiring Chief Ethics Officers
These are issues that are hitting tech and that leaves out automation and robotics and all these other issues that are societally impactful. In academics, there’s a lot of people studying this stuff. There’s a bunch of AI ethics people. There are all kinds of people actually studying these issues and the impact of social media, especially on our society.
In a previous New York Times column, I talked about the weaponization of everything, that these things amplify and weaponize things, and the people who are running these companies are ill-prepared to understand the impact of what they’ve created. I would like them to have people around them that will allow them to think about that. You could do this for Wall Street, you could do this for the defense industry, you could do this for a lot.
Mark Zuckerberg Was Ill-Prepared to Deal with the Impacts
These are people who say they’re changing the world and always touting themselves as the better thing, when in fact a lot of their inventions are very damaging to society. Think about someone like Mark Zuckerberg who didn’t complete college, never took a humanities course, has not been schooled in this.
I know he’s been trying to learn a lot of things since then, but here’s one person who has complete control over this company who is ill-prepared to deal with some of the impacts.
Tech Execs Don’t Reflect at Anything They Do
I think they don’t think about it at all. They’re so non-self reflective it’s a miracle they can see in the mirrors in Silicon Valley, they’re like vampires. They don’t reflect at anything they do.
I had a really interesting interview with the woman who was a lawyer for Google and Twitter in the early days and she explained how you create the pillars of your creation. If you put things around virality you’re gonna get fake news. If you put it around relevance and truthfulness you get a different outcome. They’ve been designing for speed and virality and it moves into the things, the problems we have.
Tech Companies Are Causing Damage All Over the World
What you have to do is figure out, when you’re designing these companies, the possible implications. That’s why I say there’s not gonna be a Chief Ethics Officer. They don’t want anyone to slow the brakes, and this is someone who would slow the brakes and say maybe we should pay more, maybe we should have human moderators, what’s going on here in Myanmar, what about India?
They have to start considering these because they’re causing damage all over the world and these technologies have massive impact and in the future are going to have even more so.
The Internet exists to collect your data. We all should realize that everything we do is recorded into huge data sets in order for advertisers to target us better, for online stores to sell us stuff more effectively and for government agencies to know what we are up to. That’s why the Internet attracts investors and why it is financially viable as a platform. Unfortunately, criminal enterprises are also utilizing our data to steal from us and sell our data to other criminals on the Dark Web.
In a recent Tedx Talk, Eric Jardine, Assistant Professor at Virginia Tech specializing in Internet security and privacy, dived into the difficult issue of privacy and whether the Internet should be anonymous or open to data collectors:
Growing Lack of Online Privacy
At the core of all of this is our growing societal problem of online privacy, or really the lack thereof. Currently, through a series of seemingly innocuous choices, we’re barreling towards a world with very little individual privacy. There are tools that currently exist which could catapult us from one end of this spectrum all the way to the other. Technologies such as The Onion Router, or Tor, are gateways to a portion of the internet known colloquially as the Dark Web. In this realm we’re anonymous by design and our privacy is pretty much at the maximum we can get.
We don’t want to end up on either of these extremes. Instead, there are discrete practical steps that we can all take as part of our daily lives that would move the needle a little bit and land us somewhere more in the middle.
The Statistics
3.9 Billion Internet Users
2.2 Billion Facebook Users
5 Billion Daily Snapchats
300 Hours Watched on YouTube Per Minute
As we move from more of a digital content platform-centric universe into an Internet of Things reality, we move into a whole other area of interconnectivity. Projections are wild, but one study indicates that by 2025, as many as 75 billion IoT devices could be interconnected in our cars, our homes, even things as silly as our toothbrushes!
Our engagement with all of these platforms and with all of these internet-connected devices generates data. That data takes two separate forms. The first is content-related information or content-related data, some of which could be very personal, such as sexting. It’s obvious why you wouldn’t want companies, governments, or random strange individuals getting access to the content of your messages.
The other type of data is what is known as metadata, or essentially data about data. Imagine three data points. The first is a visit to a dating site; the second transaction in this record is a visit to a sexual health website a couple of days later; and then following that, maybe a month later, there is a visit to an abortion clinic website. These are three solitary little data points, but already I bet you are starting to infer something about the person who’s visiting these sites and can draw a number of conclusions.
Imagine how many data points you might generate with the searches you do in a day, a week, a month, or a year. Once you leverage Big Data analytics and apply it to all of this information, you get a scarily accurate picture of what people are like.
Data is the Internet’s Business Model
If the amount of data that is generated wasn’t bad enough, the entire internet ecosystem, the entire commercial world wide web, is essentially based on a business model that emphasizes the collection, aggregation, sharing and monetization of your user-generated data.
This means that there are a host of actors who have access to the data that we generate as we move through the internet ecosystem. This obviously includes ISPs, but it also includes the content platforms we engage with on a routine basis, governments who can requisition all of this information, and third-party data brokers. In the United States alone there are upwards of 4,000 discrete data brokers whose whole job is to take information about you, aggregate it, and sell it to somebody for some purpose.
We have a privacy problem. We readily engage every day with an ecosystem that’s built around the absence of privacy and the use of our personal data. I think there are important reasons, over and beyond the fact that maybe we want to keep things private some of the time, why privacy remains something important and something to be cherished in society.
No Privacy Vs. Privacy
We have on one end of the privacy spectrum a problem with the lack of privacy, and on the other end a suite of tools that could catapult us all the way in the other direction, where it’s possible we could become too private for society’s good. These are the tools that are part and parcel of the Dark Web, and the Dark Web is anonymous by design. And of course, there’s a problem with the ways in which humans tend to behave or react when we get total anonymity. As soon as our actions lose consequences, we can tend to become unmoored from our moral sensibilities, and that’s problematic for society as a whole and probably for us as individuals as well. The Dark Web does that by design.
It says you do what you want and no one will know who you are, so it doesn’t matter. Researchers from the United Kingdom have done a great job categorizing available Dark Web content and they found a pretty grim picture. For instance, about 15% of available Dark Web content is dedicated to drug sites, 2% is dedicated to what they categorized as child abuse imagery sites.
It’s clear that this extreme end is very problematic. You’re dealing with a cesspool of crime, and that’s not a place I think a lot of us want to go.
So where does that leave us?
If we’ve lopped off the two extremes of that privacy spectrum, where do we want to land? In sort of Goldilocks fashion, we probably want to be somewhere closer to the middle of that. I would like to prioritize privacy when we can but recognize that there are limits. Then the question obviously becomes if that’s where we want to land, and currently, we’re careening towards little privacy, and the tools of Tor and the Dark Web will take us too far in the other direction, how do we get to that happy medium?
There are two things that we need to do, generally speaking, to get there. The first is we need to be sympathetic to privacy. Inherently, privacy is context specific and we need to be sympathetic about that. We don’t want to just default to the assumption that you are clearly doing something wrong if you actually want some element of privacy.
Secondly, and I think a bit more problematic pragmatically, we don’t want to just give up on privacy. When you see an entire ecosystem built up around the idea that your data is what turns a buck, it’s hard to fight back against that. Facebook has 2.2 billion users, what am I going to do against that kind of behemoth?
I think within this category, there are tools that we could use to increase our privacy that are well short of Tor and the Dark Web. You could use, for example, a virtual private network, or VPN. That’s a small step, but it masks part of your Internet traffic in a way that’s discoverable by law enforcement, yet hidden from Internet service providers or the companies that are trying to build profiles about you. You could also use more private search engines, like DuckDuckGo, that promise to keep your data private, as opposed to Google.
If you ever get asked why you should use DuckDuckGo instead of Google, here’s a definitive answer by @yegg: https://t.co/217OxreS4e
Beyond that, it’s important for us to recognize that individually we may be weak, but collectively we have a tremendous amount of power. We are consumers, and if we work in concert and come to a social consensus that privacy is still valuable, we can pressure the developers of the various services that we use to emphasize privacy.
Lastly, we can pressure our elected representatives and government to try to come to grips with the fact that technology is not going away and that we need to deal with the fact that data is going to be collected and data is going to be shared. The questions about how much data to collect, when and how long an entity should keep our data, what kind of rules govern data and who can share the data are certainly the role of government.
Microsoft released a 2018 version of “A Cloud for Global Good” this morning. “The beginning of a new year offers an opportunity to reflect on the past and look forward to the future,” stated Brad Smith, Microsoft President and Chief Legal Officer, in a blog post.
Smith adds, “It’s in this spirit that today we are releasing an updated A Cloud for Global Good, a policy road map for governments, industry and civil society to consider as they realize the opportunities and address the challenges presented by the Fourth Industrial Revolution. This new version, which updates the edition we released in October 2016, reflects our rapidly changing world and recent advancements in artificial intelligence, machine learning, mixed reality and other cloud-enabled technologies.”
Microsoft continues to hope that this book of policy recommendations is used by governments and industry as a manifesto of sorts for inclusion and policy standardization for global good. “There is still much work to do if we are truly going to create a cloud for global good,” explains Smith. “It is a big responsibility for every government, every business and every technology company. It certainly is a big responsibility for Microsoft.”
The key tenets of A Cloud for Global Good are responsibility and inclusion. The document calls on governments around the world to be more responsible in the areas of human rights, public safety, technology crime, and the environment. It also looks ahead to potential future issues concerning artificial intelligence. Microsoft is also urging governments to ensure an affordable internet and retraining for those who hold potentially automated jobs, and wants protections to ensure inclusion for those with disabilities.
Facebook announced today that it no longer will allow advertisers marketing products and services related to housing, credit or employment to target their ads based on race.
Erin Egan, VP of US Public Policy and Chief Privacy Officer, explained:
Recently, policymakers and civil rights leaders have expressed concerns that advertisers could misuse some aspects of our affinity marketing segments. Specifically, they’ve raised the possibility that some advertisers might use these segments to run ads that discriminate against people, particularly in areas where certain groups have historically faced discrimination — housing, employment and the extension of credit.
We take these issues seriously. Discriminatory advertising has no place on Facebook.
Facebook is making changes as follows:
Going forward, we have decided to make the following changes to our advertising products. We will:
Build tools to detect and automatically disable the use of ethnic affinity marketing for certain types of ads: We will disable the use of ethnic affinity marketing for ads that we identify as offering housing, employment, or credit. There are many non-discriminatory uses of our ethnic affinity solution in these areas, but we have decided that we can best guard against discrimination by suspending these types of ads. We will continue to explore ways that our ethnic affinity solution can be used to promote inclusion of underrepresented communities, and we will continue to work with stakeholders toward that goal.
Offer more clarification and education: We will update our Advertising Policies to be even more explicit and require advertisers to affirm that they will not engage in discriminatory advertising on Facebook, and we will offer new educational materials to help advertisers understand their obligations with respect to housing, employment and credit.
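In enforcement terms, the first change amounts to a policy gate sitting in front of ad delivery: if an ad falls into a protected category and its targeting uses ethnic affinity segments, it gets blocked. A rough sketch of that logic in Swift, with entirely hypothetical names since Facebook has not published its implementation:

```swift
// Hypothetical sketch of the policy gate Facebook describes: disable
// ethnic affinity marketing for housing, employment, and credit ads.
// All names are illustrative, not Facebook's actual systems.
enum AdCategory {
    case housing, employment, credit, other
}

struct AdRequest {
    let category: AdCategory           // assumed to come from an upstream classifier
    let targetingSegments: Set<String> // e.g. ["ethnic_affinity:X", "age:25-34"]
}

func isAllowed(_ ad: AdRequest) -> Bool {
    let usesEthnicAffinity = ad.targetingSegments.contains { $0.hasPrefix("ethnic_affinity:") }
    let restricted: Set<AdCategory> = [.housing, .employment, .credit]
    // Block only the combination of a restricted category and affinity targeting.
    return !(restricted.contains(ad.category) && usesEthnicAffinity)
}
```

The hard part in practice is the classifier that decides an ad “offers housing, employment, or credit” in the first place; the gate itself is simple.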
In a reversal of previous pledges, WhatsApp is going to begin sharing data with Facebook in order to connect accounts, detect spam and improve ad targeting. This includes sharing your phone number and usage information for Facebook’s internal use, but not sharing any actual texts, since they are encrypted and neither Facebook nor WhatsApp has access to them.
WhatsApp is giving existing users 30 days to opt out of sharing their data with Facebook. After that, all users will be subject to the new Terms and Privacy Policy.
Here’s how WhatsApp describes the sharing of data with Facebook in the new Terms:
We joined the Facebook family of companies in 2014. As part of the Facebook family of companies, WhatsApp receives information from, and shares information with, this family of companies. We may use the information we receive from them, and they may use the information we share with them, to help operate, provide, improve, understand, customize, support, and market our Services and their offerings. This includes helping improve infrastructure and delivery systems, understanding how our Services or theirs are used, securing systems, and fighting spam, abuse, or infringement activities.
Facebook and the other companies in the Facebook family also may use information from us to improve your experiences within their services such as making product suggestions (for example, of friends or connections, or of interesting content) and showing relevant offers and ads. However, your WhatsApp messages will not be shared onto Facebook for others to see. In fact, Facebook will not use your WhatsApp messages for any purpose other than to assist us in operating and providing our Services.
WhatsApp is seeking new ways for its users, especially businesses, to utilize WhatsApp, which will open up additional monetization opportunities in the future. They are exploring the use of WhatsApp in business transactions with customers related to online orders and sales, appointments and reservations, delivery and shipping notifications, and business updates to customers related to their products and services, as well as integrating WhatsApp in company marketing.
“For example, you may receive flight status information for upcoming travel, a receipt for something you purchased, or a notification when a delivery will be made,” posted WhatsApp. “Messages you may receive containing marketing could include an offer for something that might interest you. We do not want you to have a spammy experience; as with all of your messages, you can manage these communications, and we will honor the choices you make.”
“But as we announced earlier this year, we want to explore ways for you to communicate with businesses that matter to you too, while still giving you an experience without third-party banner ads and spam,” added WhatsApp. “Whether it’s hearing from your bank about a potentially fraudulent transaction, or getting notified by an airline about a delayed flight, many of us get this information elsewhere, including in text messages and phone calls. We want to test these features in the next several months, but need to update our terms and privacy policy to do so.”
“We’re also updating these documents to make clear that we’ve rolled out end-to-end encryption,” they said. “When you and the people you message are using the latest version of WhatsApp, your messages are encrypted by default, which means you’re the only people who can read them. Even as we coordinate more with Facebook in the months ahead, your encrypted messages stay private and no one else can read them. Not WhatsApp, not Facebook, nor anyone else. We won’t post or share your WhatsApp number with others, including on Facebook, and we still won’t sell, share, or give your phone number to advertisers.”
“But by coordinating more with Facebook, we’ll be able to do things like track basic metrics about how often people use our services and better fight spam on WhatsApp,” the company stated. “And by connecting your phone number with Facebook’s systems, Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them. For example, you might see an ad from a company you already work with, rather than one from someone you’ve never heard of.”
Google shared some new numbers related to the “right to be forgotten” ruling, which has led to individuals requesting URL removals from search results. For all the background on that, peruse our coverage here.
The stats appear on Google’s Transparency Report, where Google now claims to have evaluated 1,234,092 URLs for removal. The total number of requests it has seen dating back to May 2014 is 348,085.
Here’s the latest look at the sites that are most impacted:
This list, Google says, highlights the domains where it has removed the most URLs from search results. Of the total URLs requested for removal, these sites account for 9%.
How are Google, Facebook, Twitter, and other major tech companies protecting your digital rights?
Not so well, according to a new study.
Google scored a 65 out of 100 in the inaugural Ranking Digital Rights Corporate Accountability Index, which you might think sounds pretty bad.
But Google’s 65%, a ‘D’ on the grade scale with which you’re likely familiar, is the best of the bunch.
The Ranking Digital Rights initiative’s first Corporate Accountability Index looked at 16 of the world’s most powerful tech companies – eight internet giants and eight telecommunications companies. The index evaluates each company on 31 separate criteria in three main categories – commitment, freedom of expression, and privacy.
Questions companies were evaluated upon include:
Does the company commit to provide meaningful notice and documentation to users when it changes its Terms of Service?
Does the company explain the circumstances under which it may restrict or deny users from accessing the service?
If the company restricts content or access, does it disclose how it notifies users?
Does the company disclose what user information it collects, how it collects this information, and why?
Does the company disclose if and how it shares user information with third parties?
In aggregate scoring, Google performed the best with a 65. Yahoo, Microsoft, and Twitter followed with scores of 58, 56, and 50, respectively. Facebook mustered a 41.
But even the “winners” aren’t really winners.
“When we put the rankings in perspective, it’s clear there are no winners,” said Rebecca MacKinnon, director of Ranking Digital Rights. “Our hope is that the Index will lead to greater corporate transparency, which can empower users to make more informed decisions about how they use technology.”
On the bright side, every company researched was at least doing something, even if that something is not enough.
“All of the companies assessed have at least some practices and/or policies in place that help to protect freedom of expression or privacy,” said Ranking Digital Rights. But “even the companies that ranked highest are missing the mark in some ways, and improvements are needed across the board to demonstrate a greater commitment to users’ freedom of expression and privacy.”
On Friday, Snapchat updated its terms to give itself “worldwide, perpetual, royalty-free, sublicensable, and transferable license to host, store, use, display, reproduce, modify, adapt, edit, publish, create derivative works from, publicly perform, broadcast, distribute, syndicate, promote, exhibit, and publicly display that content in any form and in any and all media or distribution methods (now known or later developed).”
We pointed out that this language is certainly nothing new in the industry, as apps like Facebook and Instagram basically say the same thing in their terms. It’s also not that different from how the terms read before the update.
Snapchat added the word “store” when it comes to your content, and that sent users up in arms.
“First off, we want to be crystal clear: The Snaps and Chats you send your friends remain as private today as they were before the update. Our Privacy Policy continues to say—as it did before—that those messages “are automatically deleted from our servers once we detect that they have been viewed or have expired.” Of course, a recipient can always screenshot or save your Snaps or Chats. But the important point is that Snapchat is not—and never has been—stockpiling your private Snaps or Chats. And because we continue to delete them from our servers as soon as they’re read, we could not—and do not—share them with advertisers or business partners,” said the company in a blog post.
“It’s true that our Terms of Service grant us a broad license to use the content you create—a license that’s common to services like ours. We need that license when it comes to, for example, Snaps submitted to Live Stories, where we have to be able to show those Stories around the world—and even replay them or syndicate them (something we’ve said we could do in previous versions of our Terms and Privacy Policy). But we tried to be clear that the Privacy Policy and your own privacy settings within the app could restrict the scope of that license so that your personal communications continue to remain truly personal.”
If you want something to remain completely, 100% private, don’t send it to another person via a social media app. Snapchat’s terms are pretty much the same as every other company with a spot on your smartphone homescreen.
Facebook tests new features all the time. Many of them die off, some of them succeed. This one needs to die. Quickly.
Some people (a small set of iOS users) are seeing Facebook at its creepiest, and most annoying. Facebook for iOS is suggesting that users post the last link they copied to their clipboard.
No. Stop. Please stop.
Am I really surprised that the Facebook app knows the last link I copied? No, of course it does. It’s the Facebook app. It knows more about me than my wife.
It’s just an annoying, highly invasive feature.
The majority of the links I copy on my iPhone are stupid GIFs I’m sending to my friends, or dumbass YouTube videos that I’ve stumbled upon. Maybe an article I wanted to talk about with a friend or two. I’m sure this describes many of you, as well.
Look, if I want to share an article on Facebook, I’ll do that. Most of the stuff on my iOS clipboard is dumb. I don’t want to share John Cena GIFs on Facebook.
I’m sure I’m not the only one annoyed by this. I’m not going to take the “it’s an invasion of my privacy” route. You use Facebook. Deal with that.
Snapchat just unveiled a new privacy policy and Terms of Service, and since about 10% of your snaps are dick pics, you’re likely concerned. A lot of people are. Here, look. And here.
Scary? Sure, if you’re not familiar with any other social media terms agreement. Snapchat now gives itself “worldwide, perpetual, royalty-free, sublicensable, and transferable license to host, store, use, display, reproduce, modify, adapt, edit, publish, create derivative works from, publicly perform, broadcast, distribute, syndicate, promote, exhibit, and publicly display that content in any form and in any and all media or distribution methods (now known or later developed).”
As do Instagram, Facebook, and most other social networks and messaging services.
Let’s back up, though. Here are Snapchat’s new Terms of Service and the passage that has people up in arms:
Many of our Services let you create, upload, post, send, receive, and store content. When you do that, you retain whatever ownership rights in that content you had to begin with.
But you grant Snapchat a worldwide, perpetual, royalty-free, sublicensable, and transferable license to host, store, use, display, reproduce, modify, adapt, edit, publish, create derivative works from, publicly perform, broadcast, distribute, syndicate, promote, exhibit, and publicly display that content in any form and in any and all media or distribution methods (now known or later developed). We will use this license for the limited purpose of operating, developing, providing, promoting, and improving the Services; researching and developing new ones; and making content submitted through the Services available to our business partners for syndication, broadcast, distribution, or publication outside the Services. Some Services offer you tools to control who can—and cannot—see your content under this license. For more information about how to tailor who can watch your content, please take a look at our privacy policy and support site.
To the extent it’s necessary, you also grant Snapchat and our business partners the unrestricted, worldwide, perpetual right and license to use your name, likeness, and voice in any and all media and distribution channels (now known or later developed) in connection with any Live Story or other crowd-sourced content you create, upload, post, send, or appear in. This means, among other things, that you will not be entitled to any compensation from Snapchat or our business partners if your name, likeness, or voice is conveyed through the Services.
For comparison, here’s how the old terms read:
You retain all ownership rights in your User Content. However, by submitting User Content to Snapchat, you hereby grant us an irrevocable, nonexclusive, worldwide, perpetual, royalty-free, sublicensable, and transferable license to use, reproduce, modify, adapt, edit, publish, create derivative works from, distribute, perform, promote, exhibit, and display such User Content in any and all media or distribution methods, now known or later developed (the “User Content License”), subject to any privacy settings you have set to control who can see your User Content. Without limiting the foregoing, when you submit User Content to Snapchat in connection with Our Stories and other crowd-sourced Stories, you agree that the User Content License accords Snapchat the right to sublicense such User Content to other companies, organizations, or individuals in connection with the syndication, broadcast, distribution, promotion, or publication of Our Stories and other crowd-sourced Stories in any and all media or distribution methods, now known or later developed. No use of User Content, including without limitation, Our Stories and other crowd-sourced Stories, in accordance with the User Content License shall entitle you to any compensation from Snapchat, or any other companies, organizations, or individuals.
So, what changed? For starters, Snapchat added the word “store” when it comes to your content.
Wait, Snapchat is storing your content? Isn’t that the antithesis of what Snapchat is supposed to be?
“When a snap is viewed and the timer runs out, the app notifies our servers, which in turn notify the sender that the snap has been opened. Once we’ve been notified that a snap has been opened by all of its recipients, it is deleted from our servers. If a snap is still unopened after 30 days, it too is deleted from our servers,” Snapchat said in a blog post.
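That quoted flow maps onto a simple lifecycle: delete a snap once every recipient has opened it, and sweep unopened snaps after 30 days. Here’s a toy sketch of that logic in Python; the names (Snap, SnapStore, mark_viewed, purge_expired) are invented for illustration and have nothing to do with Snapchat’s actual code:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

UNOPENED_TTL = timedelta(days=30)  # unopened snaps expire after 30 days

@dataclass
class Snap:
    sender: str
    recipients: set[str]
    sent_at: datetime
    viewed_by: set[str] = field(default_factory=set)

class SnapStore:
    """Toy in-memory model of the delete-on-view flow described above."""

    def __init__(self):
        self.snaps: dict[int, Snap] = {}
        self.next_id = 0

    def send(self, snap: Snap) -> int:
        snap_id = self.next_id
        self.snaps[snap_id] = snap
        self.next_id += 1
        return snap_id

    def mark_viewed(self, snap_id: int, recipient: str) -> None:
        snap = self.snaps[snap_id]
        snap.viewed_by.add(recipient)
        print(f"notify {snap.sender}: snap opened by {recipient}")
        # Once every recipient has viewed the snap, delete it from the server.
        if snap.viewed_by >= snap.recipients:
            del self.snaps[snap_id]

    def purge_expired(self, now: datetime) -> None:
        # Snaps still unopened after 30 days get deleted too.
        for snap_id in list(self.snaps):
            if now - self.snaps[snap_id].sent_at > UNOPENED_TTL:
                del self.snaps[snap_id]
```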
That’s not really the case anymore. Snapchat has its Stories feature now, and it allows replays. It’s also working on sponsored content. That data has to be kept around longer.
The new Terms of Service essentially extended Snapchat’s license over your content – from just what you post to your Story to the more private snaps you send friends.
Is this a big deal? I don’t know, is it a big deal to you? Snapchat’s privacy policy isn’t anything new in the industry. But hey, you’re probably not sharing dick pics on Instagram.
Though many bloggers are looking for more eyes on their content, some might want fewer – especially if they tend to be the target of abusive trolls.
Now, Tumblr is adding what it calls a “simple layer of privacy to let you better control who gets to see your stuff and who doesn’t.” Starting today, you can choose to hide your blog from the web.
“We’ve built a new toggle for you, Tumblr: Now you can choose whether or not your blog is viewable on the web. If you switch it off, your followers will still be able to see your posts in their dashboards (and like them, and reblog them), but anyone who tries to visit your blog at its URL will just get a big fat 404 error.”
That’s right, Tumblr is giving users the option to make their content only viewable within the Tumblr-sphere.
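Mechanically, the toggle is easy to picture: dashboard traffic gets through, anonymous web traffic gets a 404. Here’s a minimal sketch assuming a hypothetical Flask-style handler; the names (blogs, is_dashboard_request) are invented, and this is a guess at the behavior, not Tumblr’s actual code:

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical store mapping blog names to their settings.
blogs = {
    "example-blog": {"hidden_from_web": True, "posts": ["post one", "post two"]},
}

def is_dashboard_request() -> bool:
    # Stand-in for Tumblr's real check that the viewer is a logged-in
    # follower browsing their dashboard rather than the public web.
    return request.headers.get("X-Viewer-Context") == "dashboard"

@app.route("/<blog_name>")
def view_blog(blog_name: str):
    blog = blogs.get(blog_name)
    if blog is None:
        abort(404)
    # The new toggle: hidden blogs 404 for web visitors but still
    # render for dashboard traffic.
    if blog["hidden_from_web"] and not is_dashboard_request():
        abort(404)
    return {"posts": blog["posts"]}
```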
“Pairs nicely with the block feature,” says Tumblr.
Of course, Tumblr is touting this move as a bonus privacy feature, which to some extent it is. But we’re talking about the same Tumblr that doesn’t let users make their main blogs private.
And I’m not sure that taking people to a 404 error page is the best way to go about this. When I see a 404, I think broken, not private. Right?
Over the past week, you may have seen a friend or family member post something like this:
Now it’s official! It has been published in the media. Facebook has just released the entry price: $5.99 to keep the subscription of your status to be set to ‘private.’ If you paste this message on your page, it will be offered free (paste not share) if not tomorrow, all your posts can become public. Even the messages that have been deleted or the photos not allowed. After all, it does not cost anything for a simple copy and paste.
This is crap, to put it bluntly. Facebook will never charge you to use the service. That would be counterproductive. It already makes plenty of money off you. You are the product.
Also, Facebook will never just turn private posts public. You always have full control over who sees your activity.
Don’t believe Facebook? Ok, fine. Believe me. I’m pretty trustworthy, I promise. Facebook is not trying to ruin people’s privacy – people do a fine enough job of that on their own.
But just because this is a hoax doesn’t mean privacy isn’t a serious issue on social media. And it’s shocking how many people are unaware of just how much granular control Facebook gives them when it comes to controlling who sees what on the site.
Is Facebook evil? Probably not. The company is making money off you, yes. And it’s tracking everything you do, yes. But despite what some might think, Facebook doesn’t lie to you about privacy controls. They are there, and they’re rather easy to use.
The Activity Log
Did you know that there is a page that contains every single action you’ve ever taken on Facebook? It’s called your Activity Log, and it’s easily accessible from the little lock icon on your Facebook homepage.
Inside your Activity Log, you can see everything you’ve ever done on Facebook. Everything you’ve ever posted, commented on, liked, RSVP’ed, or been mentioned in. And here, you have complete control over every single action.
You can change the audience, from public to just friends, for instance. Or from friends to custom.
Facebook’s Custom privacy setting, by the way, allows you to single out specific people to shield from your activity.
Want to post something you think your mom will hate? Simply exclude her – and only her – from that post’s audience. It’s that easy.
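Under the hood, that kind of rule is just an audience check with an exclusion list. Here’s a toy Python model; Post, Audience, and can_view are invented names for illustration, not Facebook’s real internals:

```python
from dataclasses import dataclass, field
from enum import Enum

class Audience(Enum):
    PUBLIC = "public"
    FRIENDS = "friends"
    CUSTOM = "custom"

@dataclass
class Post:
    author: str
    audience: Audience
    excluded: set[str] = field(default_factory=set)  # e.g. {"mom"}

def can_view(post: Post, viewer: str, friends_of_author: set[str]) -> bool:
    if viewer in post.excluded:
        return False  # Custom exclusions trump everything else
    if post.audience is Audience.PUBLIC:
        return True
    # FRIENDS and CUSTOM audiences both start from the friend list.
    return viewer in friends_of_author

# The "don't show mom" example from above:
post = Post(author="me", audience=Audience.CUSTOM, excluded={"mom"})
print(can_view(post, "mom", friends_of_author={"mom", "friend"}))     # False
print(can_view(post, "friend", friends_of_author={"mom", "friend"}))  # True
```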
There’s really no excuse to bitch about Facebook’s privacy settings when you can control everything from this one hub.
The “View As” Timeline
Also available from the lock icon’s drop-down menu is the “View As” option. This allows you to view your own Timeline as another person sees it.
You can enter in a specific friend’s name and check out what your Timeline looks like to them.
Per-post privacy
And don’t forget, you can always choose your privacy levels for every individual post you make.
If you want true privacy, don’t use social media. But if you’re smart about things, you can easily control your online persona in a matter of clicks.
And you can rest assured that the posts you make more private will stay that way.