WebProNews

Tag: Ethics

  • Clearview AI Caught Lying About Who Can Use Its Software

    The hits keep on coming: Clearview AI has been caught lying about who can access its controversial facial recognition software.

    Clearview has amassed a database of billions of photos, scraped from millions of websites, including the biggest social media platforms. The company then makes that database available through its facial recognition software. Since The New York Times broke the story in January, Clearview has faced ongoing criticism from lawmakers and privacy advocates alike who say the company represents a fundamental threat to privacy.

    To make matters worse, BuzzFeed discovered documents proving the company plans to expand internationally, including to countries with authoritarian regimes. Shortly afterward, Clearview’s entire client list was stolen, showing that the international expansion has already begun.

    Amid the scrutiny and controversy, Clearview has tried to reassure critics that it is responsible in its use of its database. In fact, in a blog post on the company’s site, Clearview says its “search engine is available only for law enforcement agencies and select security professionals to use as an investigative tool.”

    Similarly, the company’s Code of Conduct emphasizes that its software is for law enforcement and security professionals, and that the company holds itself to a high standard of ethics, integrity and professionalism.

    There’s just one problem: if the NYT’s report is accurate, it isn’t true. The Times says it “has identified multiple individuals with active access to Clearview’s technology who are not law enforcement officials. And for more than a year before the company became the subject of public scrutiny, the app had been freely used in the wild by the company’s investors, clients and friends.

    “Those with Clearview logins used facial recognition at parties, on dates and at business gatherings, giving demonstrations of its power for fun or using it to identify people whose names they didn’t know or couldn’t recall.”

    This is just the latest example of the irresponsible and unethical way Clearview has conducted business.

  • Google ‘Unrecognizable’ To Company Veterans

    Google has undergone a number of major changes over the years, not the least of which is its two founders stepping down from their roles. Those changes have left the company virtually “unrecognizable” to many Google veterans, according to CNBC.

    For many workers who spoke with CNBC, 2018 was a pivotal year that showed how much things had changed. Project Dragonfly became public knowledge, exposing Google’s attempt to build a censored search engine for China. In a company that had long treasured a reputation for open communication with its employees, the project had been kept on a need-to-know basis.

    Although Google ended the project after employees raised concerns about its ethics, for many the damage had already been done.

    “There’s no way a few years before, they would have had a secret project with these kinds of ethical concerns,” Raph Levien, a former level 6 engineer who left Google after 11 years, told CNBC. “It crossed the line and felt misleading. It definitely felt like this was Google changing.”

    Another factor that has hurt the company’s reputation internally is its handling of sexual abuse allegations, including paying accused executives millions of dollars in severance. The size of the company has also played a role, as it is much harder for a company of “more than 100,000 workers, many of whom are contractors instead of full-time employees,” to maintain the culture it started with.

    One thing is clear from CNBC’s report: for a company already in the spotlight over privacy issues and antitrust concerns, the last thing Google needs is an internal breakdown of the very culture that made it what it is.

  • Google’s DeepMind Starts Ethics Group to Examine AI’s Impact on Society

    Google is finally taking steps to ensure that its rapid development in the field of AI will only bring about positive change for the whole of humanity. London-based company DeepMind, a subsidiary of Google parent firm Alphabet, has formed a new research unit called “Ethics & Society,” tasked with steering the group’s AI efforts.

    “Our intention is always to promote research that ensures AI works for all,” DeepMind explains in a blog post. Promising to “help technologists put ethics into practice,” the DeepMind Ethics & Society group outlined the principles that will guide its future endeavors: social benefit, rigorous and evidence-based research, transparency, and diversity.

    The group comprises thinkers and experts from a variety of disciplines. They include Nick Bostrom (Oxford University philosopher), Diane Coyle (economist at the University of Manchester), Edward W. Felten (computer scientist at Princeton University) and Christiana Figueres (Mission 2020 convener), to name a few, Gizmodo reported. The group lists some of the key issues it will address, including managing AI risk, setting standards for AI morality and values, and lessening the economic disruption AI will likely bring when it replaces real people in the workforce.

    It remains to be seen how much influence DeepMind Ethics & Society will have over Google’s AI ambitions. A clash between the two groups seems likely, given that Google’s drive to churn out potentially profitable AI-powered products may run counter to Ethics & Society’s goals and principles.

    The rapid development of artificial intelligence is a rather divisive issue even among industry titans. One of the most vocal opponents of unregulated research on AI is Tesla CEO Elon Musk, who views artificial intelligence as a potential threat to mankind and has called for a proactive stance on its regulation.

    “AI is the rare case where I think we need to be proactive in regulation instead of reactive,” Musk said earlier this year. “Because I think by the time we are reactive in AI regulation, it’ll be too late. AI is a fundamental risk to the existence of human civilization.”

    [Featured Image via YouTube]

  • In Vitro Fertilization Using Three People Debated in England

    The Human Fertilisation & Embryology Authority in the U.K. has begun consulting the public over whether a new in vitro fertilization (IVF) technique that prevents mitochondrial diseases is ethical.

    The technique, known as mitochondrial replacement, enables women to give birth to children with less risk of passing on a mitochondrial disease. Mitochondrial diseases can sometimes cause muscle weakness, intestinal disorders and heart disease, and can shorten life expectancy. According to the HFEA, around 1 in 200 children are born with a form of mitochondrial disease.

    So what’s the catch? The mitochondrial replacement technique uses mitochondria from a donor to replace the mitochondria in a pre-implantation IVF embryo. The embryo is then implanted as normal. Any child born this way will share a small amount of DNA with the donor, meaning he or she would technically have three biological parents and could eventually pass the donor’s DNA on.

    The HFEA, which is an independent regulator, has been asked by the U.K. government to “seek public views” on whether the technique should be available to couples who risk passing a mitochondrial disease to their child.

    “The Government has asked us to take the public temperature on this important and emotive issue,” said HFEA Chair Lisa Jardine. “The decision about whether mitochondria replacement should be made available to treat patients is not only an issue of great importance to families affected by these terrible diseases, but is also one of enormous public interest. We find ourselves in uncharted territory, balancing the desire to help families have healthy children with the possible impact on the children themselves and wider society.

    “We will use our considerable experience of explaining complicated areas of science and ethics to the public to generate a rich debate that is open to all.”

  • Nokia Starts Ethics Review After Faked Ad

    Some of the momentum Nokia gained from last week’s announcement of its new Lumia 920 and Lumia 820 smartphones was halted when it was revealed that an ad for its PureView camera technology was faked. The ad featured a “demonstration” of PureView’s image stabilization technology. However, a reflection in a window gave away that the footage of a girl riding a bicycle was actually shot by a full camera crew with high-quality video equipment.

    Nokia issued an apology immediately after the ruse was discovered, stating that the ad was only meant to “simulate” the image stabilization that would be possible with the Lumia 920’s camera. To be technical about it, Nokia only apologized for not putting a disclaimer on the ad, but the whole thing was a silly and unneeded screw-up. The company has since released a real demonstration of its PureView technology, which is impressive enough on its own.

    Today, Nokia told Bloomberg Businessweek that it will have an ethics and compliance officer conduct an investigation and prepare an independent report on the incident. It sounds like some heads are going to roll at Nokia.

    It’s all a shame, really, since Nokia’s new smartphones look pretty slick and well-designed. The company should be able to market the phones on their own strengths. Nokia has now fully thrown in with Windows Phone 8, tying its future as a high-end smartphone manufacturer to Microsoft’s new tile-based OS ecosystem. According to Bloomberg, Nokia’s stock is down 45% since the beginning of this year.

  • Texas Instruments Among “World’s Most Ethical Companies”

    Apple’s new iPad has been set loose upon the world today, giving consumers something to think about other than Apple’s dubious labor practices. Meanwhile, in the world of companies that hold themselves to a high standard for their employees’ quality of life, Texas Instruments, maker of amazing toys of my youth but now mostly a manufacturer of notable graphing calculators, has been named one of the world’s most ethical companies by the Ethisphere Institute for the sixth year in a row.

    “Ethical behavior and decisions are about how we expect one another to behave in this world — about being competitive and accountable and making sure our values are at the heart of the culture we want to have. This culture has served TI well for more than 80 years and has the ability to be the longest term competitive advantage we have. It’s the way we do business,” said Rich Templeton, Texas Instruments President and CEO. “TI is just as good as all of us behave as individuals.”

    This is the sixth year Ethisphere has published the “World’s Most Ethical Companies” rankings. The Ethisphere Institute reviewed hundreds of companies and evaluated a record number of applications using its proprietary methodology of in-depth research and multi-step analysis, naming to this year’s list the companies that surpassed their industry peers. The 2012 list features companies in more than three dozen industries, including 40 companies headquartered outside the United States. The full list will appear in Ethisphere Magazine’s first-quarter issue.

    “We take ethical leadership and corporate citizenship seriously at TI, and we’re honored to be included again on this year’s list,” said David Reid, Texas Instruments Vice President and Director of Ethics. “TI understands that ethical practices not only support a stronger and more solid business overall, but they benefit the community and raise the bar for ethics and integrity within the industry.”

    The full list won’t be available until May, so we’ll have to wait till then to find out how less scrupulous tech companies that manufacture our most revered gadgets are busily destroying the lives of their workers.

  • Washington Post Masthead On A Chinese Government Publication

    Freedom of speech, and consequently freedom to advertise, are fundamental principles of a free democracy and a thriving capitalist economy, right? That’s what we’re told in this country from a young age. Well, it turns out those freedoms are also employed by the Chinese Communist Party. In America. Namely, in The Washington Post.

    This is the source of an ethical controversy that has sprung up recently in the arena of journalism. Each month, the Post runs a paid supplement called China Watch, along with a regularly updated website of the same name. The “paid” part is covered by the Chinese government. In return, China gets to publish articles produced by China Daily, the house organ of the Chinese government, in the Post and under its masthead. Articles in China Watch portray China and its government in the way you might expect: positively, or else with a particular diplomatic glibness. Ad copy, some call it. Others call it propaganda.

    It’s a hard boundary to find, that line between advertising and propaganda. People who don’t like being sold to are quick to label all advertising as propaganda of a kind, while free market advocates might suggest that if you pay for it, and make it clear that you paid for it, then even a government can simply advertise. The Washington Post says it makes no attempt to conceal the paid nature of China Watch. Both the print edition of the publication and its corresponding website bear a small disclaimer box in the top right corner. But critics of the Post’s partnership with China Daily argue that the disclaimer is not nearly as prominent on the page as the Post’s masthead at the top of the insert. While readers have technically been informed that China Watch is paid content, the prominence of the Post’s masthead makes a bigger statement, confusing readers who might think the Post officially endorses China Watch content. The web edition of the pro-China publication is hosted under the Washington Post domain name. Moreover, the Post neglects to disclose who pays for the ads.


    Of course, there’s no law generally requiring companies to disclose details about their advertising partners to the general public. However, things are a bit different when you’re dealing with a representative from a foreign government. The Post’s dealings with China Daily could run afoul of the Foreign Agents Registration Act, which requires that foreign agents and their activities be properly identified to the American public. Such disclosure involves more than a box in the upper-right-hand corner.

    Nor is this the only instance in which The Post has been accused of serving as a mouthpiece for the Chinese government. In an editorial last month, Patrick Pexton, The Post’s own ombudsman, lambasted the newsroom for, at best, lazy journalism and, at worst, kowtowing to the Chinese PR machine. Particularly at issue in the editorial was The Post’s February 13 publication of an “interview” with Chinese Vice President Xi Jinping. It was later revealed that the “interview” was hardly an interview at all: Post reporters submitted written questions to Xi, and in return received answers to a set of questions the Chinese side had modified, deleted and added to. Pexton disagreed with the newsroom’s decision to print the response:

      So, The Post submits written questions — already a far cry from a live face-to-face unscripted interview with journalists — and the Chinese say, thanks, but we don’t like your questions, so we’ll provide our own questions and answers. Take it or leave it.

      The Post took it. I think it should have left it.

    Of course, Pexton pointed out, this is a complicated issue. While both the printing of the scripted “interview” and the lack of transparency regarding China Watch suggest the Post is soft, even misleading, in its coverage of China, The Post also does its fair share of reporting that embarrasses the Chinese government and others. It’s a difficult world to navigate, especially when dealing with China, which often withholds press visas or grows mum around reporters who ask too many uncomfortable questions.

    It’s not just The Post that faces this difficulty. China is sitting on over a billion citizens, nuclear weapons, the world’s fastest-growing economy, and $1.2 trillion of U.S. debt. So it has a lot of weight to throw around with governments and major corporations, let alone media outlets. But is it right for The Post to lend its masthead and domain name to China Watch? Pexton observes:

      That’s the thing about China, whether you are The Washington Post, the U.S. government or Apple computers. There is interdependence in the relationship, and constant negotiation and compromise. The Chinese know it, and they take advantage of it.

    Right might not always come into play these days.

    Hat Tip: The Washington Free Beacon

  • Facebook, Teachers & Students: What Not To Do

    If you’re a teacher and you have a Facebook account, which describes most teachers, you are likely to receive friend requests from your students. Students don’t know anything, which is why they need teachers to educate them, so they may not really understand why this could be a bad idea. As nice as it would be for teachers to simply wish these murky situations away, that won’t happen. Sorry. Instead, because this is an issue sensitive to many people, it would probably be best to err on the side of caution and avoid a Facebook relationship with your students altogether. Easy enough to follow through on that one.

    Whether you agree with this path of least resistance or prefer some other course of action to amicably resolve the potential problem, there is one thing you should most certainly not do: act shady about being Facebook friends with your students by telling them to keep it on the down-low or, worse, setting up fake accounts in order to befriend them.

    A couple of teachers in England apparently missed this policy memo and are now being investigated for maintaining inappropriate relationships with students via Facebook. One teacher who, incredibly, exchanged comments with a former pupil about posing for erotic photos over a webcam received a 12-month suspension. Another teacher received a reprimand for using a decoy account in order to interact with students via Facebook.

    This isn’t exactly breaking news, because everybody knows there are creeps on the Internet. That’s not to say that these teachers are explicitly creeps; they could very well be decent humans who happened to make some very questionable decisions this time. It happens. It’s happened in the United States, it’s surely happened elsewhere, and it’s a pretty safe bet that it will continue happening. But if you’re doing something that makes you self-conscious enough to try to obfuscate your actions, then what you’re doing is more than likely not a good thing.

    In the world of journalism, there’s this thing called a breakfast test. It goes like this: when determining whether the material you’re about to publish is appropriate, you ask yourself, “Would this be too shocking for someone to read while eating breakfast in the morning?” The metric here is that if the material is offensive enough to cause someone to choke on their Cheerios or spit out their bacon, then you probably shouldn’t publish it.

    Similarly, if you’re a teacher, consider how some of your colleagues would pass the breakfast test if they were to discover in the morning news some day that you’re being investigated for how you’ve been corresponding with your students on Facebook. If you think your colleagues might require the Heimlich maneuver upon hearing the news, then you might want to re-evaluate the importance of those Facebook interactions with your students.