WebProNews

Tag: reporting

  • Google Analytics Adds User Explorer Reporting (And More)

    Google’s latest release notes for Google Analytics reveal some interesting new features, including a new set of reporting called User Explorer Reporting. This lets customers anonymously analyze individual users’ interactions with their site.

    Google explains in the notes, “User Explorer utilizes your existing anonymous Google Analytics data to deliver incremental insights that help marketers obtain the valuable insights needed to improve and optimize their site.”

    User Explorer can be found in the Audience section. The report will surface Anonymous Client ID and User ID information, including a history of activity. Marketing Land has a good look at the report.

    Also found in the release notes are: deep linking into AdWords from the AdWords reporting section in GA; [Attribution 360] Data Studio integration; google-analytics.com traffic moved to SSL; [Analytics 360] custom tables: align regex interpretation; flexible auto-tagging override for GA-AdWords linking; AdWords final URL dimension; new sitelinks report in AdWords reporting section in GA; [Analytics 360] Add Experiment Fields to GA’s BigQuery Export; and Google Analytics Reporting API V4.

    Go here for descriptions of all of these items.
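
    For anyone who wants to experiment with the Google Analytics Reporting API V4 listed above, here is a minimal sketch of a basic report request in Python. It assumes the google-api-python-client and oauth2client packages, a service-account key file, and a Google Analytics view ID of your own; the file name and view ID below are placeholders.

        # Minimal sketch: sessions by traffic source via the GA Reporting API V4.
        # Placeholder key file and view ID -- substitute your own.
        from googleapiclient.discovery import build
        from oauth2client.service_account import ServiceAccountCredentials

        SCOPES = ['https://www.googleapis.com/auth/analytics.readonly']
        KEY_FILE = 'service-account.json'   # placeholder service-account key
        VIEW_ID = '12345678'                # placeholder GA view ID

        credentials = ServiceAccountCredentials.from_json_keyfile_name(KEY_FILE, SCOPES)
        analytics = build('analyticsreporting', 'v4', credentials=credentials)

        # Request sessions broken down by source for the last seven days.
        response = analytics.reports().batchGet(body={
            'reportRequests': [{
                'viewId': VIEW_ID,
                'dateRanges': [{'startDate': '7daysAgo', 'endDate': 'today'}],
                'metrics': [{'expression': 'ga:sessions'}],
                'dimensions': [{'name': 'ga:source'}],
            }]
        }).execute()

        for row in response['reports'][0]['data'].get('rows', []):
            print(row['dimensions'][0], row['metrics'][0]['values'][0])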

    Image via Google

  • Facebook Steps Up Its Suicide Prevention Efforts

    Since 2011, Facebook has had a system in place for reporting suicidal content. Of course, Facebook’s advice if you encounter a direct threat on the site is to contact law enforcement – but you can also alert Facebook to the suicidal posts, which it’ll investigate.

    As it stands, that reporting process is a bit clunky.

    Facebook, in a new initiative with various organizations like Forefront, Now Matters Now, the National Suicide Prevention Lifeline, and Save.org, is improving both the tools it offers for reporting suicidal content and the way it handles that content after it’s been reported.

    Now, if you see a friend suggest that they may be considering harming themselves, you can click the “report post” button and flag the post as suicidal in nature. At that point, you’ll be given the option to privately message your friend, contact another Facebook friend to help, or get in touch with a suicide helpline.

    Once you report the post, Facebook will begin to investigate. If it feels as though the person is indeed in distress, Facebook will initiate a new protocol for the next time the person logs in.

    “Hi _____, a friend thinks you might be going through something difficult and asked us to look at your recent post,” says the message.

    Facebook will then ask if the person wants to talk to someone (a hotline), get tips and support, and more.

    “Keeping you safe is our most important responsibility on Facebook,” says the company. “We worked with mental health organizations Forefront, Now Matters Now, the National Suicide Prevention Lifeline, Save.org and others on these updates, in addition to consulting with people who had lived experience with self-injury or suicide. One of the first things these organizations discussed with us was how much connecting with people who care can help those in distress.”

    Facebook says that the new reporting features are now available for about half of US users and will continue to roll out to the rest in the next few months.

    Image via Facebook Safety

  • Instagram Users Sent to Facebook When Reporting Other Users

    Facebook assured Instagram users that their experience with the service wouldn’t change post-acquisition, and that Instagram would continue to “grow independently.” While Facebook isn’t necessarily backtracking on that position, they are beginning to integrate the two services little by little.

    The latest integration comes in the form of user reporting. Now, when Instagram users choose to report a user (for whatever reason), they are directed to a Facebook page to complete the report.

    The Facebook page asks users to use the form to report an Instagram Web Profile and gives the options of spam, nudity, hate speech, and underage user. There are links on the Facebook page that direct users back to Instagram for clarification on types of reporting.

    “Instagram is owned by Facebook, so if you’re logged into Facebook we may use your Facebook account info to help us figure out what’s going on,” reads a message on the report form.

    It shouldn’t be a surprise that Facebook is beginning to integrate parts of Instagram after last year’s big acquisition. But that vague “you’re logged in, so we’re going to use that to find stuff out” message may give privacy hounds some concern. Especially after the big privacy dustup that saw users enraged at Facebook/Instagram for changing its privacy policies to permit the selling of user photos. Kind of.

    [h/t AllFacebook]

  • Facebook Delves Deep Into The Reporting Process

    With 900+ million people on the site, it’s inevitable that someone is going to post something that offends your sensibilities. And although Facebook has systems in place to weed out content that violates their terms of service, the company has always relied on users to report inappropriate, malicious, or otherwise unsafe content that they run across during their daily browsing.

    But what happens when you click that “Report” button? Today, Facebook is giving us an inside look at the entire process – and let me tell you, a whole hell of a lot goes into it.

    Here’s what they had to say in a note posted to the Facebook Security page:

    There are dedicated teams throughout Facebook working 24 hours a day, seven days a week to handle the reports made to Facebook. Hundreds of Facebook employees are in offices throughout the world to ensure that a team of Facebookers are handling reports at all times. For instance, when the User Operations team in Menlo Park is finishing up for the day, their counterparts in Hyderabad are just beginning their work keeping our site and users safe. And don’t forget, with users all over the world, Facebook handles reports in over 24 languages. Structuring the teams in this manner allows us to maintain constant coverage of our support queues for all our users, no matter where they are.

    Facebook goes on to explain that the “User Operations” teams are broken up into four separate teams that each have their own specific area of concentration – the Safety team, the Hate and Harassment team, the Access team, and the Abusive Content team. Of course, any content that one of those teams finds to be in violation of Facebook policy will be removed and the user who posted it will get a warning. From there, the User Operations teams can determine if further action is required – like restricting the user from posting certain types of content or, in some cases, disabling their account altogether.

    Once that goes down, there is also a team that handles appeals.

    The security team adds that they aren’t alone in making sure you have a safe experience on the site:

    And it is not only the people who work at Facebook that focus on keeping our users safe, we also work closely with a wide variety of experts and outside groups. Because, even though we like to think of ourselves as pros at building great social products that let you share with your friends, we partner with outside experts to ensure the best possible support for our users regardless of their issue.

    These partnerships include our Safety Advisory Board that helps advise us on keeping our users safe to the National CyberSecurity Alliance that helps us educate people on keeping their data and accounts secure. Beyond our education and advisory partners we lean on the expertise and resources of over 20 suicide prevention agencies throughout the world including Lifeline (US/Australia), the Samaritans (UK/Hong Kong), Facebook’s Network of Support for LGBT users, and our newest partner AASARA in India to provide assistance to those who reach out for help on our site.

    It’s obvious that the reporting process is a complicated one, and users naturally report plenty of content that they just don’t like but that doesn’t really violate Facebook’s terms. In that case, it’s up to the user to initiate contact with the poster and ask that they take it down. Facebook says that user safety and security is of “paramount importance” around their offices, but in order for it to work, the impetus is on the user to click that “Report” button when they come across inappropriate content.

    Check out Facebook’s neat little infographic on the process below: