WebProNews

Tag: warnings

  • Bing Warns Searchers About Fake Online Pharmacies


    Bing is rolling out a new search feature that warns people who may be about to buy some bad medications.

    Online pharmacies offer customers cheaper, more available drugs – and this can be a dangerous combination. The Food and Drug Administration keeps a list of fake online pharmacies via its Internet Pharmacy Warning Letters, and Bing is using that list to warn searchers when they navigate to those sites.

    “When there is a significant risk of serious harm to the public from purchasing unsafe, counterfeit and other illegal drugs online, the Bing team wants to help our users make informed decisions. With this goal in mind, we are rolling out a new set of warnings on Bing.com to give our customers more information about the dangers of visiting unsafe online pharmacies,” says Bing.

    Here’s what you will see if you are trying to navigate to one of the sites on the FDA’s list:


    “The list of online pharmaceutical sites for which we are providing warnings will grow and evolve. We will continue to refine our efforts in this area and look for more opportunities to help our users make more well-informed decisions as additional, highly-reliable sources of information become available to us,” says Bing.

  • Elon Musk Calls for Research to Make Sure Artificial Intelligence Doesn’t Kill Us All


    UPDATED BELOW

    For Tesla and SpaceX CEO Elon Musk, figuring out how to avoid the “potential pitfalls” of artificial intelligence is just as important as advancing it – if not more so.

    Musk, who has been warning us about the possible dangers of AI for some time now, is once again calling for more research into AI safety. Musk has signed and is promoting an open letter from the Future of Life Institute that calls for “research not only on making AI more capable, but also on maximizing the societal benefit … ”

    “The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems,” says the letter.

    “There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

    The Future of Life Institute is a “volunteer-run research and outreach organization working to mitigate existential risks facing humanity.” The group’s current focus is on “potential risks from the development of human-level artificial intelligence.”

    You may be unfamiliar with this specific interest of Musk’s, but the billionaire has been rather outspoken about it – especially in the last year or so. In June of last year, Musk pretty much admitted to investing in an up-and-coming AI company to keep an eye on them.

    “Yeah. I mean, I don’t think – in the movie Terminator, they didn’t create A.I. to – they didn’t expect, you know some sort of Terminator-like outcome. It is sort of like the Monty Python thing: Nobody expects the Spanish inquisition. It’s just – you know, but you have to be careful,” he said.

    Soon after, he tweeted that AI was “potentially more dangerous than nukes.”

    Then, a few months later, Musk had this to say as a reply to an article on a futurology site:

    “I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen … ”

    Point being – Elon Musk is pretty concerned about the robot apocalypse, and thinks you should be too.

    “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do,” says the letter.

    Yeah, not what they want to do. That’s when everything goes to hell in a handbasket.

    UPDATE 1: Musk has just donated $10 million to The Future of Life Institute.

  • Facebook Smartly Adds Warnings to Graphic Videos

    When it comes to dealing with violent and/or potentially offensive content, Facebook has made a lot of missteps. Now, the biggest social network in the world is looking to find a satisfactory middle ground between a completely hands-off approach and the sort of stifling gatekeeping that would elicit (and has in the past elicited) cries of censorship.

    Facebook is beginning to show warnings on top of content flagged as graphic, forcing users to agree to continue before watching or viewing said content. The company is also looking to restrict all such content among its younger user base (13-17).

    The past couple of years have seen Facebook flip and flop around when it comes to how the company wants to deal with graphic content on the site. In 2013, Facebook bowed to public outrage, online petitions, and harsh criticism from family groups and made the decision to ban a graphic beheading video that had been circulating around the site.

    Fast forward a few months, and Facebook was singing a different tune. The company reversed the ban on the video, and in doing so instituted a new policy to govern similar content.

    Soon after, Facebook made an official change to its community standards. Here’s Facebook’s current stance on graphic content:

    Facebook has long been a place where people turn to share their experiences and raise awareness about issues important to them. Sometimes, those experiences and issues involve graphic content that is of public interest or concern, such as human rights abuses or acts of terrorism. In many instances, when people share this type of content, it is to condemn it. However, graphic images shared for sadistic effect or to celebrate or glorify violence have no place on our site.

    When people share any content, we expect that they will share in a responsible manner. That includes choosing carefully the audience for the content. For graphic videos, people should warn their audience about the nature of the content in the video so that their audience can make an informed choice about whether to watch it.

    But here’s the thing – expecting people to share content in a responsible manner and hoping that they’ll warn people that they’re about to see someone’s head being chopped off is naive at best. Facebook isn’t naive about these sorts of things – not really. That’s why the company laid the groundwork for this latest move way back in 2013.

    “First, when we review content that is reported to us, we will take a more holistic look at the context surrounding a violent image or video, and will remove content that celebrates violence. Second, we will consider whether the person posting the content is sharing it responsibly, such as accompanying the video or image with a warning and sharing it with an age-appropriate audience,” said Facebook at the time. And the company did experiment with warnings for graphic content – but they never went wide.

    Now, it appears they are. A new warning is reportedly appearing for some on top of a video of the death of policeman Ahmed Merabet, who was killed in Paris by a terrorist involved in the Charlie Hebdo attacks.

    “Why am I seeing a warning before I can view a photo or video?” asks a recently-posted question on Facebook’s help page.

    “People come to Facebook to share their experiences and raise awareness about issues that are important to them. To help people share responsibly, we may limit the visibility of photos and videos that contain graphic content. A photo or video containing graphic content may appear with a warning to let people know about the content before they view it, and may only be visible to people older than 18,” says Facebook.

    A Facebook spokesperson told the BBC that “the firm’s engineers were still looking to further improve the scheme” which could “include adding warnings to relevant YouTube videos.”

    Apparently, Facebook was pressured both externally and internally – from its safety advisory board – to do something more to protect users (especially kids) from graphic content.

    Of course, there’s a whole other group of people that Facebook is worried about protecting.

    Video is a-boomin’ on Facebook. Facebook serves, on average, over a billion video views per day – almost one per user – and in the past year, the number of video posts per person has increased 75% globally and 94% in the US. And this is important to advertisers. What’s also important to advertisers? That their smoothie ads aren’t running up against beheading videos.

    Adding warnings to graphic content is a smart move. Not only does it allow Facebook to keep the content on the site and thus dodge the “free speech!” cries, but it lets advertisers feel safer about advertising on the site. It also puts the onus on users – hey, we told you it was bad but you clicked anyway … your choice!

    Remember, Facebook isn’t a haven for free speech. It never will be. Facebook doesn’t owe you free expression. The company can do whatever it wants and censor as much content as it pleases. Considering that, a little warning before graphic content is better than no content at all, right?

    Image via Mark Zuckerberg, Facebook

  • Elon Musk Once Again Warns of the Looming Robot Apocalypse


    Tesla and SpaceX founder Elon Musk has once again taken to a public forum to warn everyone that they shouldn’t sleep on recent developments in the artificial intelligence field. In short, Musk says that the chance of “something seriously dangerous happening” is likely in five years or so, and a near certainty within a decade.

    Musk posted his warning on science and futurology site edge.org, as a reply to an article titled The Myth of AI. At some point, Musk deleted his comment – but quick redditors over at the futurology subreddit caught it.

    Here’s what he had to say:

    The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast – it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don’t understand.

    I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen…

    This is by no means Musk’s first warning of this kind.

    In August, he tweeted that AI was potentially more dangerous than nukes.

    A few months ago he vocalized his concerns regarding a possible Skynet scenario. In fact, he pretty much admitted to investing in an AI company so that he could keep an eye on them.

    And barely three weeks ago, speaking at MIT’s Aeronautics and Astronautics department’s Centennial Symposium, Musk compared harnessing AI to controlling a demon.

    “I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence.

    “Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

    I don’t know if you’re inclined to buy into the plausibility of a robot apocalypse but if you’re going to listen to someone, Elon Musk has to be near the top of the list. Ignore at your own risk.

  • Grapefruit Warning For Medications Expanded

    A review published this week in the Canadian Medical Association Journal (CMAJ) warns that while the number of prescription drugs that have adverse interactions with grapefruit is increasing, many doctors may be unaware of the effects. The review was performed by the same researchers who discovered the adverse interactions over 20 years ago.

    “Many of the drugs that interact with grapefruit are highly prescribed and are essential for the treatment of important or common medical conditions,” said Dr. David Bailey, lead author of the study. “Recently, however, a disturbing trend has been seen. Between 2008 and 2012, the number of medications with the potential to interact with grapefruit and cause serious adverse effects…has increased from 17 to 43, representing an average rate of increase exceeding 6 drugs per year. This increase is a result of the introduction of new chemical entities and formulations.”

    Many of the drugs that interact with grapefruit are also commonly abused, including benzodiazepines, oxycodone, and erectile dysfunction drugs. The side effects of drug interactions with grapefruit can include acute kidney failure, respiratory failure, gastrointestinal bleeding, bone marrow suppression in people with compromised immune systems, renal toxicity, and sudden death.

    Grapefruit, Seville oranges, limes, and pomelos contain organic compounds called furanocoumarins, which inhibit the CYP3A4 enzyme. CYP3A4 normally metabolizes drugs, inactivating around 50% of all medications. For drugs of which very little is normally absorbed into the bloodstream, blocking this enzyme can deliver what is essentially a mega-dose of the medication, which can lead to overdose. Interactions can occur even hours after grapefruit is consumed.

    “Unless health care professionals are aware of the possibility that the adverse event they are seeing might have an origin in the recent addition of grapefruit to the patient’s diet, it is very unlikely that they will investigate it,” said the study’s authors. “In addition, the patient may not volunteer this information. Thus, we contend that there remains a lack of knowledge about this interaction in the general healthcare community.”

    According to the review, people over 45 years old are the prime purchasers of grapefruit, and also the people who receive the most prescriptions for drugs. As a result, older people are most vulnerable to adverse grapefruit interactions.

    “The current trend of increasing numbers of newly marketed grapefruit-affected drugs possessing substantial adverse clinical effects necessitates an understanding of this interaction and the application of this knowledge for the safe and effective use of drugs in general practice,” said the study’s authors.

  • Gaming News Site ‘Gamasutra’ Hacked?

    Go to the gaming site Gamasutra and you will get the above warning from Google Chrome. A similar warning appears in Safari, blocking access to the entire site.

    Gamasutra has tweeted about the problem and is looking into a solution. The site has not commented on the source of the problem, or on whether the malware threats are real. We will update this article as more information becomes available.

    Thanks everyone for bringing the malware warnings to our attention. We are currently looking into the issue.