WebProNews

Category: DevNews


  • Kroger CEO: How We Compete for Software Engineers with Facebook

    Kroger CEO: How We Compete for Software Engineers with Facebook

    Kroger and all retailers are fast becoming tech companies and thus have the difficult task of competing with companies like Facebook for top tech talent. According to Kroger CEO Rodney McMullen, one of their secrets to recruiting software engineers is the promise of more responsibility quicker than anywhere else.

Rodney McMullen, Kroger Chairman and CEO, reveals how Kroger competes with Facebook and the tech world for software engineers at NRF 2019, Retail’s Big Show:

    How Kroger Competes for Tech Talent

    In terms of the number of employees, I think you will have the same number but the skillsets will be a lot different. If you look at digital, for example, we have 500 people in our digital team. Within 2-3 years we will have a thousand. With software engineers, it is a completely different type of talent. Yes, we compete with (Facebook). It’s kind of fascinating.

    It’s important for people to eat. It’s important for people to eat things they like. If you come to Kroger you are able to help people get exactly what they want when they want it. You get immediate feedback on something that is incredibly important. If the customer likes it you see it immediately. If they don’t like it you see it immediately. So you get great feedback.

    More Responsibility Quicker Than Anywhere Else

    I always tell people when we are recruiting them, I guarantee you that you will have more responsibility quicker than anywhere else. We have 25-year-old and 30-year-old people running $100 million and $200 million businesses.

On a couple of tests that we have going on right now, we have two interns that actually did the software work to get it in place. When their internship finished they went back to college and kept working with us to finish the project they had started. It’s one of those things where you get a tremendous amount of responsibility incredibly fast.

    The Future of Retail

    I think the store will be multi-purpose. I think about one of our bigger stores. It wouldn’t surprise me if you had a small warehouse in the back of that store. You will use the same footprint, but half of it may be a physical store that is an experience space, half of it will be more warehouse efficiency space.


  • BeyondCorp – Google’s New Zero Trust Security Approach Explained

    BeyondCorp – Google’s New Zero Trust Security Approach Explained

    If you are a network person you have probably heard of BeyondCorp, but maybe you have had difficulty explaining it to others in your organization. Fortunately, Google’s Max Saltonstall does it for you in his latest video. Saltonstall says that Google has shifted to a security model without an inside or an outside, where each access request is reevaluated as it is made.

    Max Saltonstall, Google Cloud Developer Advocate, explains what BeyondCorp is in a new video posted by Google Cloud Platform:

    Most Companies Look at Security as a Binary

Most companies look at security as a binary, with the good folks on the inside and the bad folks kept outside. Security teams install various firewalls and VPN tools to create a strong perimeter. They are always looking for taller, thicker walls to respond to the last type of attack or compromise. But this model breaks down as soon as things get more complicated.

    Employees have to work outside. Contractors need access to just one or two internal systems, not all of them. Mobile devices aren’t compatible with your VPN client and attackers are sneaking into your network on previously trusted devices, hiding inside like a Trojan horse. We’ve seen the reinforced perimeter model break down in many ways exposing the highly vulnerable interior.

    We Shifted to a Model Without an Inside or an Outside

    At Google, we shifted to a model without an inside or an outside. We reevaluate the trust of each request as it is made and test to see if we should grant access. All access to company resources gets decided based on the context of the request. Who is it and should they see this thing? What device are they on and do I consider that safe? If the identity plus device plus access policy all check out, then they get in. If not, it’s an express bus to 403 town. Permission denied.


In this model, there’s no trust inherent to any network or location. We don’t care if you’re sitting at home, at a coffee shop, or at the office, you get exactly the same level of access. It’s easy to start down this path on Google Cloud Platform with Identity-Aware Proxy. All you need is an app that’s using Compute Engine, App Engine or Kubernetes plus Google identities for your employees and you can start securing your apps with identity control.
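
    To make that access decision concrete, here is a minimal sketch, assuming an app deployed behind Identity-Aware Proxy and a known expected audience string, of verifying the signed assertion header that IAP attaches to each request using the google-auth library:

```python
# Minimal sketch, assuming the app sits behind Identity-Aware Proxy (IAP)
# and the expected audience string for the backend is known.
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

IAP_PUBLIC_KEYS_URL = "https://www.gstatic.com/iap/verify/public_key"

def verify_iap_request(iap_jwt, expected_audience):
    """Return the caller's email if the IAP assertion checks out, else None."""
    try:
        claims = id_token.verify_token(
            iap_jwt,
            google_requests.Request(),
            audience=expected_audience,
            certs_url=IAP_PUBLIC_KEYS_URL,
        )
        return claims.get("email")
    except ValueError:
        return None  # invalid or missing assertion: the express bus to 403 town

# In a request handler, the JWT arrives in the
# "x-goog-iap-jwt-assertion" header set by IAP.
```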


  • How Lime Monitors Data From 26,000,000 Trips in Real-Time

    How Lime Monitors Data From 26,000,000 Trips in Real-Time

    Lime monitors data from their scooters and bikes in real-time in order to ensure that all bikes are charged and available to riders. In fact, in 2018 Lime monitored 26 million scooter and bike trips worldwide according to their year-end report. The average ride was just over one mile totaling 28 million miles for the year.

Not only does Lime monitor data from its scooters, it also remotely slows them down depending on where they are in a city, all in real time.

    Xiuming Chen, Engineering Manager of Infrastructure at Lime, discussed how Lime uses AWS and Amazon Kinesis to ingest all of this real-time data:

    Lime Sends Commands to Scooters in Real-Time

    Lime is a company about urban transportation and we provide a green and affordable transportation option for the users. Our scooter is connected to our servers to ensure a higher quality of service. We need to send commands to scooters in almost real-time. Maintaining hundreds of thousands of concurrent connections is a huge engineering challenge and that number is only growing.

    Lime Slows Scooters Down Depending on Geolocation

We collect all kinds of information from these vehicles. For example, GPS location, velocity, battery level, and motor information. Safety is a top priority for Lime. As an example, we have a feature that requires us to slow down vehicles when they enter certain areas of the city. To achieve that we have had to increase the frequency of data collection drastically. We have a team of data scientists and machine learning engineers. The team analyzes this data to help us understand how people use our service.

    On-Demand “Juicers” Use App to Collect Scooters

Normally we see spikes from 11 a.m. to 4 p.m., depending on where you are, but sometimes we also notice a very interesting spike at 9:00 p.m. We have a network of on-demand workers that charge our scooters. We call them Juicers. Every night at 9:00 p.m. we start to allow Juicers to collect scooters. All of them open the app at the same time, which causes a traffic spike that flattens our servers. In the past, that traffic came directly into a relational database and our service became slow and unusable.

Amazon Kinesis Ingests Real-Time Scooter Data

We started to use Amazon Kinesis to ingest real-time data coming from our vehicles. The speed of the growth of this industry is incredible. Scalability is one critical issue we have to deal with, and we can let Kinesis do the heavy lifting behind the scenes. We can spend the time working on the more important features that users really need.
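
    To make the ingestion pattern concrete, here is a minimal sketch using boto3 to put a telemetry record onto a Kinesis stream. The stream name, region, and record fields are hypothetical placeholders, not Lime’s actual schema or pipeline.

```python
# Minimal sketch: publish one vehicle telemetry record to a Kinesis stream.
# The stream name, region, and fields below are illustrative assumptions.
import json
import time

import boto3

kinesis = boto3.client("kinesis", region_name="us-west-2")

def publish_telemetry(scooter_id, lat, lng, velocity, battery_pct):
    record = {
        "scooter_id": scooter_id,
        "lat": lat,
        "lng": lng,
        "velocity": velocity,
        "battery_pct": battery_pct,
        "ts": time.time(),
    }
    # Partitioning by scooter ID keeps each vehicle's events ordered within a shard.
    kinesis.put_record(
        StreamName="vehicle-telemetry",
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=scooter_id,
    )

publish_telemetry("scooter-001", 37.7749, -122.4194, velocity=3.2, battery_pct=87)
```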


  • SAP Massively Going for Expansion Into Multi-Cloud World, Says CTO

    SAP Massively Going for Expansion Into Multi-Cloud World, Says CTO

    “We’re massively going for the expansion into this multi-cloud world,” says Björn Goerke, SAP CTO & President of the SAP Cloud Platform. “We strongly believe that the world will remain hybrid for a number of years and we’re going in that same direction with the SAP Cloud Platform.”

    Björn Goerke, SAP CTO & President SAP Cloud Platform, recently discussed the future of the SAP Cloud Platform in an interview with Ray Wang, the Founder & Chairman of Constellation Research:

    Massively Going for Expansion Into Multi-Cloud World

We’re massively going for the expansion into this multi-cloud world. We strongly believe that hybrid clouds will play a major role in the coming years. If you also follow what the hyperscalers are doing, Amazon was the last one to announce an on-premises hybrid support model. We strongly believe that the world will remain hybrid for a number of years and we’re going in that same direction with the SAP Cloud Platform.

We announced partnerships with IBM and ANSYS already and there will be more coming. We’re totally committed to the multi-cloud strategy, driving the kind of choice for customers that they demand. But then what we’re more and more focusing on is business services and business capabilities. It’s about microservices as well. It’s really about business functionality that customers expect from SAP. We are an enterprise solutions company.

    It’s Really About No Code and Low Code Environments

With our broad spectrum of 25 industries, we support all the lines of business within a corporation, from core finance to HR to procurement, you name it. We are focused on a high level of functionality that we can expose via APIs and microservices on a cloud platform to allow customers to quickly reassemble and orchestrate customer-specific, differentiating solutions.

    There is no other company out there in the market that has the opportunity to really deliver that on a broad scale worldwide to our corporate customers.

    That’s where we’re heading and that’s where we’re investing. We’re working on simplifying the consumption of all of this. It’s really about no code and low code environments. You need to be able to plug and play and not always force people to really go down into the trenches and start heavy coding.

    SAP Embedding Machine Learning Into Applications

    Beyond that machine learning is all over and on everybody’s mind. What we’re doing is making sure that we can embed machine learning capabilities deep into the application solutions. It can’t be that every customer needs to hire dozens and even hundreds of data scientists to figure these things out.

    The very unique opportunity that SAP has is to take our knowledge in business processes, take the large data sets we have with our customers, and bring machine learning right into the application for customers to consume out of the box.

RPA is a big topic as well, of course. We believe that, for the most part, 50 percent of ERP processes can potentially be automated within the next few years. We are heavily investing in those areas as well.

    Focused on Security, Data Protection, and Privacy

    Especially if you think about the level of connectivity and companies opening up their corporate environments more and more, clouds being on everybody’s mind, and the whole idea to make access to information processes available to everybody in the company and in the larger ecosystem at any point in time from anywhere, of course, that raises the bar that security has to deliver. So it’s a top of mind topic for everybody.

There are a lot of new challenges also from an architectural perspective with how these things are built and how you communicate. We have a long-standing history as an enterprise solution provider, so we know exactly what’s going on there. There’s security, and there are data protection and privacy requirements that companies have to comply with these days. I think we’re well positioned to serve our customers’ needs there.

    https://youtu.be/JwXU89MrdaA


  • BrainQ Developed Unique AI-Powered Brain-Computer Medical Device, Says CEO

    BrainQ Developed Unique AI-Powered Brain-Computer Medical Device, Says CEO

Working with the Google Developers Launchpad, BrainQ has developed a unique AI-powered brain-computer medical device, says their CEO, Yotam Drechsler. “It takes patients’ brainwaves as an input with a set of metadata and runs machine learning algorithms in the cloud and translates them into a tailored electromagnetic treatment aimed at facilitating their central nervous system recovery process,” says Drechsler.

Yotam Drechsler, CEO of BrainQ, discussed the company’s unique AI-powered technology in a video for Google Developers Launchpad:

    AI-Based Medical Device to Treat Neural Disorders

    BrainQ is developing an AI-based medical device aimed at getting powerless people following neural disorders, like stroke or spinal cord injury, back on their feet. Every single year, hundreds of millions of people around the world suffer from neural disorders. Stroke alone accounts for 15 million people every single year. And the entire neural disorders cost to the US economy is $1.5 trillion every single year.

My grandfather had a stroke several years ago. From being the center of the family, all of a sudden, he became paralyzed in half of his body. That means he can no longer do simple things like grabbing a glass of water or dressing on his own. That’s the reality for many people out there.

    Using AI to Model Physical Therapy

    The common treatment is what’s called physical therapy. It’s essentially exercising the hand or the leg back and forth. What BrainQ essentially does is modeling physical therapy and applying it directly to the brain. In a sense, we ask what happens for a patient or for a healthy person when he does a hand movement, like reaching his hand to grab a glass of water.

    We are getting a lot of people to do these kinds of movements and then we learn the patterns. We take these patterns that we have learned and identified and reapply it back to him as a personalized treatment.

    Developed Unique Brain-Computer Medical Device

We have developed a unique brain-computer interface-based medical device. It takes patients’ brainwaves as an input with a set of metadata and runs machine learning algorithms in the cloud and translates them into a tailored electromagnetic treatment aimed at facilitating their central nervous system recovery process.

We were very fortunate to have Google share this vision with us. We worked very closely with the GCP team on making this vision come true. We were fortunate to be on this program, and it really put us on a fast track. And on all four fronts, we have developed the next generation of technology with a precision medicine base, with the studio team, Peter Norvig, and the rest of the Googlers who were very, very keen on helping us.

    We had a large funding round in the past couple of months and we have several collaborations in the pipeline. We are hoping to continue on this promising track and really bring cure to millions of people around the world. And we are fortunate to have Google with us on this journey.

Oculus Exec Yelena Rachitsky Talks About How VR Can Move Beyond Gaming

Oculus Exec Yelena Rachitsky Talks About How VR Can Move Beyond Gaming

Most virtual reality products are aimed at gamers because there is a natural, readily understood fit. Can VR move beyond gaming? Yelena Rachitsky, executive producer of experiences at Oculus, offers her insights.

Yelena Rachitsky, Executive Producer, Experiences at Oculus, a virtual reality technology company owned by Facebook, was recently interviewed by TechCrunch writer Lucas Matney:

    It’s Not Just About Content, Technology is Making it Easier

We’re focusing a lot more on highly interactive content and marrying concepts that we’re understanding from gaming into more narrative approaches. Instead of shooters and strategy, how do we use these mechanics of understanding how our body works, natural, intuitive mechanics, to create pieces that people actually want to come back to, pieces people actually enjoy and don’t necessarily feel like they are playing a game.

So we’re marrying that knowledge with the form factors. A few people have mentioned Quest, which is something we’re super excited about. So it’s not just the content, it’s also the technology that’s coming and making it easier.

    Technology is Also Working to Make Things More Intuitive

A lot of technology is also working just to make things much more intuitive. It’s a combination of how we’re approaching content, being more compelling, more intuitive, more interactive, more emotional, with the form factors in the hardware. The thing I’m really interested in is how we approach experiences that have much more natural, intuitive interactions versus a lot of button pressing.

I gave this talk at Oculus Connect recently about embodiment and what makes us feel like something is ours when we connect with an object. Our Facebook Reality Labs research talks about something called object believability: we really believe that we’re picking up an object if it’s something that we recognize from doing it in the real world.

    The Hard Part of VR is That We Are Holding Controllers

The hard part about VR is that we’re actually holding controllers in our hands. So how do you make your brain believe that you’re actually picking up those objects? People have approached this in different ways. With Job Simulator (by Owlchemy Labs) you have big hands and really, really big buttons that you press. There’s something very rewarding about that. Then there’s a game that the studios’ team did called Lone Echo, where they put a lot of effort into how the hands form themselves around objects, because if you see your hands actually shift the way they should in real life, your brain believes it and it becomes super rewarding.

    With a lot of the projects we’re creating we’re still experimenting, we still don’t know a lot of this stuff, but we’re going all the way from fully interactive to still slightly linear. There’s not a magic formula to it, everything’s just about the intent that you want to create and then all the tools that you use for VR that push forward that intent.

  • How to Write Code 10-20 Times Faster

    How to Write Code 10-20 Times Faster

    Writing code is at the heart of software and it’s what makes applications like Google, Facebook, and Tinder work. The problem for startups and large enterprises alike is that writing code is a tremendous undertaking, costs a lot of money and takes a lot of time. For years, companies have been trying to create code-writing software with limited success.

Appian CEO and founder Matt Calkins says that their code-writing platform is a generational improvement that will, according to an independent study, enable developers to write code 10-20 times faster because it takes the code writing out of code writing.

    Matt Calkins explains how their revolutionary code writing platform works in an interview this morning on Squawk Box:

    We Can Write Code 10-20 Times Faster

    These days companies compete based on the software they write, that’s how they impress us. As customers, every company has to be more efficient but also more appealing. It used to be just back office and now it’s front office too, now it’s appealing to everybody and differentiating from their competition.

Every company’s got to write a lot of software and they need a faster way. We can write code 10-20 times faster because instead of writing it by lines you draw it like a picture. It’s a flowchart with boxes and arrows and you depict your intentions in the software and then we translate that into code. It’s an interpretation layer; you could think of it as code already existing, but it’s not blocks of code because that wouldn’t be smart.

    Built-In Artificial Intelligence

There is absolutely AI in there, but AI is not the translation, it’s just an augmentation. AI isn’t smart enough yet to actually write your code. The important thing here is there’s never an authoritative layer that you can go down to and modify. We’re interpreting your instructions, so you express what you want in software and then we interpret that on every mobile device and on any cloud in a scalable way.

    You write something in Appian and it’s going to end up being in a native app on your phone. It’s going to be a native app on every phone and every device. Appian is going to translate your intentions that you expressed in a flowchart in drag and drop ways into a piece of software that’ll run on your phone.

    What’s Missing in Code Writing is Simplicity

    Over the years there have been a lot of attempts to create an easy way to write software and this is the latest generation and I think it’s gone a lot further than the other generations. You can tell that because we’re being run by the biggest organizations in mission-critical ways, unlike these old attempts that kept it simple and weren’t powerful enough to do the top job. We’ve got some of the top firms in almost every industry around the world running multi-million dollar applications on the Appian platform which shows that it’s really industrial strength stuff.

    My mission is simplicity, with the organization too, everything about creating code should be simpler in order to allow people to develop more of it and change it faster. I think that’s what’s really missing in code.

  • How IBM Watson AI Technology Was Used at the US Open

    How IBM Watson AI Technology Was Used at the US Open

    At the Streaming Media East 2018 conference, David Clevinger, Senior Director, Product Management & Strategy, IBM Cloud Video, discussed how Watson’s AI technology was used at the recently concluded US Open Tennis Championship:

    The typical use case that we’ve been seeing is media entities that have large back catalogs of content that was originally created when they didn’t have complex metadata toolsets, didn’t have necessarily the right people applying metadata, didn’t think of all the use cases on the output side, such as historical content.

    Teaching Watson About Tennis

A very concrete example is work that we’ve done for the US Open. We actually took hundreds of thousands of video clips and photos and news articles and vocabulary terms and proper names and fed them to Watson and helped Watson to understand what tennis was about. This was so that it could do things like know, when it heard the word “ash,” that it was capital A-S-H-E, Arthur Ashe, as opposed to lowercase. There was a lot of training around that.

The output then became our ability to create clips based on what was happening within an event but also to describe historical video as well. That’s critical for companies with large media back catalogs who then need to optimize that content. You can apply it to live, of course, but that’s a typical use case that we see.

    It’s a Recursive Learning System

    It’s a recursive learning system where we took a cross-section of a set of video assets, described it to Watson, said this is what’s going on, this is who this player is, and this is what is being said. We were then able to turn it loose really on other unstructured assets, have it say what it thought it was finding, and then we were able to correct it.

    We were able to basically train it up to understand tennis specifically.

    Teaching Watson to Score Excitement

    Then the output was we could then turn it loose on a bunch of different kinds of outputs for the client. The outputs are closed captioning, video clips, and excitement scoring. We were able to do things like listen for crowd noise and then say this must be really exciting because the crowd is making a lot of noise at this moment, so we were able to turn that into an excitement score.

    We wouldn’t be able to do that if we didn’t really help the algorithm understand what it was looking at and how it should be thinking about that body of work. Then we just turned it loose and let it go.

    That’s the idea, to get it to the point where you can just turn it loose and let it run.
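
    To illustrate the kind of signal an excitement score can start from (this is not IBM’s method, just a rough stand-in), the sketch below measures crowd-noise loudness as RMS energy per one-second window of a mono audio clip and normalizes it to a 0-1 score.

```python
# Illustrative sketch only: a crude "excitement" signal from crowd loudness.
import numpy as np

def excitement_scores(samples, sample_rate):
    """samples: 1-D array of mono audio; returns one 0-1 score per second."""
    samples = np.asarray(samples, dtype=np.float64)
    window = sample_rate
    n_windows = len(samples) // window
    # Root-mean-square energy of each one-second window.
    rms = np.array([
        np.sqrt(np.mean(samples[i * window:(i + 1) * window] ** 2))
        for i in range(n_windows)
    ])
    lo, hi = rms.min(), rms.max()
    return (rms - lo) / (hi - lo + 1e-9)  # 0 = quietest second, 1 = loudest
```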

  • HipChat Maker Atlassian Calls It Quits, Sells to Rival Slack

    HipChat Maker Atlassian Calls It Quits, Sells to Rival Slack

    The saying “If you can’t beat them, join them” certainly holds true for Atlassian and Slack. The former is selling its rights to HipChat and Stride to rival Slack and will even be making a small investment in the company.

    The surprising news was announced recently by Slack CEO Stewart Butterfield. Aside from tweeting his company’s purchase of the two products, he also explained that the move was to “better support those users who choose to migrate” to Slack. Joff Redfern, Atlassian’s VP of Product Management, also confirmed the news. In a blog post, he said this was the “best way forward for our customers and for Atlassian.”

What was not as surprising was the revelation that the company would be shutting down both HipChat and Stride. The former is one of Slack’s main competitors in the workplace chat arena while the latter is a chat and collaboration system that Atlassian rolled out in 2017.

    Atlassian clarified that they only sold the intellectual rights to HipChat and Stride and that Slack will not be handling support for the two products. However, existing HipChat Server and HipChat Data Server customers will still enjoy product support until their license period ends. The two products will be discontinued on February 15, 2019.

Slack and Atlassian will also be working together to migrate all of the enterprise giant’s users over to Slack. The two companies will also be collaborating in developing future integrations. Atlassian will also be receiving a small stake in Slack, with the startup paying an undisclosed amount to the company over the next three years.

Atlassian tried hard to remain competitive in the office chat space by moving its HipChat users to Stride. Aside from the usual chat and communication features, Stride also offered project-tracking and audio and video conferencing. However, the revamped system just wasn’t enough to bring in new users and the company started to consider selling.

    Atlassian co-CEO Mike Cannon-Brookes told Bloomberg that they’re proud of what their team has built, but also admitted that “it is a crowded space, and there’s a pragmatic option there.”

    The alliance between the two rivals makes sense, especially with Microsoft chipping away at Slack’s dominance in corporate chat software. Microsoft has put the pressure on with its Teams software, which is now available to its 135 million Office cloud subscribers. It has also released a free version of Teams to attract new users. At the moment, Slack reportedly has 500,000 live organizations using its system while Microsoft says 200,000 active organizations are using Teams.

  • Facebook Improves Admin Tools for Groups, Introduces Enterprise Collaboration

    Facebook Improves Admin Tools for Groups, Introduces Enterprise Collaboration

    Facebook has launched several updates for its Groups to help admins manage them efficiently and keep communities safe. The rollout of new tools, controls, and additional features are in line with the company’s focus on creating engagement in various communities on the site.

    With more than a billion members across millions of active groups, Facebook is putting in an effort to help community managers handle nearly every activity each day. That’s why admins will now have a dedicated customer support service to handle queries and reported issues. And with more people on board, Facebook intends to give quick feedback as well. For now, the free service is only available to selected group admins on iOS and Android in English and Spanish but will continue its rollout in the coming weeks.  

    Another tool that will benefit group admins is the launching of an online educational resource. The live site contains short tutorials, product demos, and actual case studies drawn from the experience of fellow admins. Done in audio and video formats, content on the learning portal aims to give a better understanding of how Facebook and Groups work.

    As Facebook promises to build resources according to its users’ needs, the company has introduced two admin tools. One new feature will allow community admins and moderators to inform members of their rule violations that merited removal of the post. Admins and moderators can even add comments in the activity log when a post is taken down.

    Another update is allowing admins and moderators to choose certain Facebook users, otherwise called pre-approved members. Whenever they post, their content will no longer require approval since they are tagged as trusted members. This means less moderation of content for managers and more time in connecting with others.

Apart from creating communities, Facebook wants to bring social networking to the workplace as well. Called Workplace by Facebook, the collaboration tool is one of many available in the market now. It faces stiff competition from Slack, Atlassian’s Stride, and Microsoft’s Teams, but none of them has a user base that comes close to Facebook’s more than two billion.

Facebook is banking on its partnership with identity management developer Okta to bring in more business accounts and convince larger companies that Workplace is an enterprise app. With the proposed integration, employees can securely sign in through Okta and gain easy access to Workplace and other cloud apps.

  • Publishers Express Discontent Over Google’s GDPR Plan

    Publishers Express Discontent Over Google’s GDPR Plan

A group of international publishers is dissatisfied with Google’s compliance strategy for the General Data Protection Regulation (GDPR) privacy rules. Set to take effect on May 25, the rules require companies to gain explicit consent for collecting personal data and using it for ad targeting.

    The trade groups, namely, Digital Content Next, European Publishers Council, News Media Alliance, and News Media Association, published an open letter addressed to Google CEO Sundar Pichai on April 30. In it, they criticized the tech giant for passing on an unreasonable burden to them in exchange for continued access to its advertising services.

    Google outlined its consent plan on its AdWords blog in late March. However, the publishers protested that the plan was revealed too late and encumbered them with the bulk of the compliance responsibilities. As publishers using Google ad services, they have to obtain consent directly from EU users. They expressed their discontent in the open letter:

    “As the major provider of digital advertising services to publishers, we find it especially troubling that you would wait until the last minute before the GDPR comes into force to announce these terms, as publishers have now little time to assess the legality or fairness of your proposal and how best to consider its impact on their own GDPR compliance plans, which have been underway for a long time.” 

    Under the new privacy framework, there are stricter consent requirements for processing personal data collected from EU users. It protects the rights of EU citizens regarding how their data can be used. The law will also impose hefty fines and significant legal liabilities for noncompliance or mishandling of user data, which will likely fall on the publishers’ shoulders.

    However, the groups pointed out that Google’s singular approach in ensuring compliance from its publishers and advertisers is inaccurate. They added that it only protects Google’s existing business model, given its dominance in online advertising.

    According to the group, Google wants to identify itself as a data controller and asks publishers to share their gathered data. For its other ad services like Google Analytics, the company considers itself a data processor but with extensive rights over gathered information.

    The publishers underscored the lack of transparency under the compliance plan. They are wary of Google’s reluctance to provide specific information about its planned use of data, a must in obtaining legal consent under GDPR.

    But Google pointed out that it will only use the data for testing algorithms, enhancing user experiences, and improving the accuracy of its ad forecasting system.  Google clarified in a statement:

    “Because we make decisions on data processing to help publishers optimize ad revenue, we will operate as a controller across our publisher products in line with GDPR requirements, but this designation does not give us any additional rights to their data.”

The tech giant also added that the draft guidance on consent was released in December and continues to be revised, prompting Google to put out the new ad policy only this year.

  • Google’s Speech-to-Text API Gets a Big Overhaul That Could Benefit Businesses

    Google’s Speech-to-Text API Gets a Big Overhaul That Could Benefit Businesses

Google has disclosed plans to improve its Cloud Speech-to-Text API using the same speech recognition technology that powers Google Assistant and Search. The updated API (formerly known as the Cloud Speech API) is expected to improve voice recognition performance and reduce transcription mistakes by as much as 54 percent.

    The announcement came last week via Google’s blog post. The offered changes would let developers allow Internet of Things (IoT) devices to reply to users, power voice response systems for call centers, and transform text-based media into speech.

    The updates being made to the API by the Mountain View-based company could be a sign that the company is more determined to bring AI-powered tools to its systems.

    Big Improvements Made to the API

Google has made a number of key improvements to the API. One of the main updates is that developers can now choose between several machine learning models. There are currently four speech recognition models to choose from: a model for short inquiries and voice commands, one for understanding audio from phone calls, one for understanding audio from video, and the default model.

    The company has also included a video model in the update. Apparently, this model has been upgraded to process sound from videos and/or audio from several speakers. The model utilizes machine learning that’s similar to what’s used in YouTube captioning. The new video model will reportedly reduce errors by as much as 64 percent.

    There’s also the “enhanced phone_call” model, one of the first opt-in programs for logging data. The model will utilize callers’ data to improve the system. Those who choose to participate in the program will receive access to the model.

    The updated Speech-to-Text API also rolled out a punctuation model. Google has admitted that transcriptions from the previous model were atrocious because of the unusual punctuation (or the lack of it). To be fair, punctuating transcribed audio is very challenging, but Google says that its new Speech-to-Text model will show transcriptions that are easier to read and understand. It will reportedly have fewer run-on sentences and more commas, question marks, and periods.
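
    As a minimal sketch of the model selection and automatic punctuation described above, assuming the google-cloud-speech Python client and a short local WAV file (the file name and encoding settings are illustrative assumptions):

```python
# Minimal sketch: transcribe a short clip with the "video" model and
# automatic punctuation enabled. File name and encoding are assumptions.
from google.cloud import speech

client = speech.SpeechClient()

with open("clip.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    model="video",                      # or "phone_call", "command_and_search", "default"
    enable_automatic_punctuation=True,  # the new punctuation model
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```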

The improved punctuation is due to a new LSTM neural network which automatically suggests the punctuation marks to use in the text. The feature can be a big help when taking notes by voice or when doing conference calls.

    The Speech-to-Text update allows developers to tag the transcribed video or audio with optional recognition metadata. While it’s not clear how this will benefit developers at the moment, Google insists it will utilize the data it accumulated from users to determine what features it will prioritize.

    Google’s Speech-to-Text Geared Towards Business

The improvements Google made to its Speech-to-Text API seem to indicate that the company wants to attract more business users. Its new phone call and video transcription models appear to be particularly geared for tasks like those carried out by call centers. The API can now support two to four speakers and takes into consideration background noises like hold music and line static.

    The API can also be used to transcribe video broadcasts of sporting events like basketball. Sports broadcasts involve the use of multiple speakers, like the hosts, player interviewers, advertisements, plus the cheers of the crowd, sound effects and other noises attributed to the game.

    In a blog post, Dan Aharon, Google’s Cloud AI product manager, pointed out that use of the Speech-to-Text API has been steadily increasing. He also pointed out that “Access to quality speech transcription technology opens up a world of possibilities for companies that want to connect with and learn from their users.”

Google’s video and “enhanced phone_call” models are now available for English transcription. Additional languages will soon be available as well.

Audio transcripts will still cost about $0.006 per 15 seconds while the video model will cost around $0.012 per 15 seconds. However, companies can try the video model at a discounted rate of $0.006 per 15 seconds through May 31.

    [Featured image via Pixabay, article graphics via Google Cloud blog]

  • Google Cloud Introduces VPC Flow Logs, Allows Users to Collect Network Telemetry at Various Levels

    Google Cloud Introduces VPC Flow Logs, Allows Users to Collect Network Telemetry at Various Levels

    Last Thursday, Google introduced a new feature to its Virtual Private Cloud (VPC) users for tracking network operations between their servers in the Google Cloud. Called VPC Flow Logs, the tool logs and monitors all network flows sent from and received by the virtual machines (VM) inside a VPC in five-second intervals.

    The new feature is set to improve monitoring by Google Cloud Platform (GCP) admins and increase transparency in the VPC network, including traffic between Google Cloud regions. It is similar to Cisco’s NetFlow “but with additional features,” as explained in the company’s blog post.  

According to Google, “It also allows you to collect network telemetry at various levels. You can choose to collect telemetry for a particular VPC network or subnet or drill down further to monitor a specific VM Instance or virtual interface.”

    Aside from capturing telemetry data at each level, VPC Flow Logs can also track internal VPC traffic, flows between a VPC and on-premise deployments, flows between servers and any Internet endpoint, and exchange between servers and Google services.

Users can then export the collected data to Stackdriver Logging or BigQuery if they opt to keep it within Google Cloud. They can also use Cloud Pub/Sub to export the logs to other real-time analytics or security platforms. Moreover, VPC Flow Logs integrates with two leading logging and analytics platforms, Cisco Stealthwatch and Sumo Logic. The data updates every five seconds without any effect on the performance of deployed applications.
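
    Once flow logs are exported to BigQuery, they can be queried like any other table. The sketch below uses the google-cloud-bigquery client; the project, dataset/table names, and jsonPayload field paths are assumptions to check against the exported schema.

```python
# Hedged sketch: top talkers over the last day from exported VPC flow logs.
# Table name and field paths are assumptions; verify against your export.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT
      jsonPayload.connection.src_ip  AS src_ip,
      jsonPayload.connection.dest_ip AS dest_ip,
      SUM(CAST(jsonPayload.bytes_sent AS INT64)) AS total_bytes
    FROM `my-project.vpc_flows.compute_googleapis_com_vpc_flows_*`
    WHERE _TABLE_SUFFIX >= FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY))
    GROUP BY src_ip, dest_ip
    ORDER BY total_bytes DESC
    LIMIT 20
"""

for row in client.query(query).result():
    print(row.src_ip, row.dest_ip, row.total_bytes)
```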

    VPC Flow Logs lets network operators gain more insight about the network, as well as debug and troubleshoot app-related issues. The tool allows them to optimize network usage with more available information about global traffic. It also allows GCP admins to perform network forensics in investigating suspicious behavior, such as traffic from unusual sources or substantial volumes of data migration.

    [Featured image via Google Cloud website]

  • Microsoft is Now Offering AI Certification Courses, Job-Ready Skills and Real-World Experience Included

    Microsoft is Now Offering AI Certification Courses, Job-Ready Skills and Real-World Experience Included

    Professionals wanting to polish their skills or add machine learning to their resume can now do so via the Microsoft Professional Program. Recognizing the need for companies to train their employees in the latest AI trends, the software giant is now offering a series of courses to the public that comes with “a digitally shareable, résumé-worthy credential.”

The new program is called the Microsoft Professional Program for AI where, as promised by Microsoft, participants will get “job-ready skills and real-world experience.” The program is targeted at engineers who want to improve their data science and artificial intelligence skills.

The online courses combine guidance from expert instructors with hands-on labs. The AI program consists of nine skills, each estimated to take between eight and 16 hours to complete. There is also a final project that each student must complete to pass the course.

The program emphasizes hands-on learning where students are taught “how to work with data to build and train machine learning models that power interactive bots.” In addition, the series covers a variety of topics relevant in today’s workplace, such as ethics in AI, using Python as the programming language for manipulating data, and the different types of machine learning models and how to create them.

    However, participants do not have to complete each segment in one go. Students can opt to complete each module within three months while the final project has a six-week deadline. Each segment or course is only offered four times in one year.

    Enrollees earn credit for finishing a course or segment. Should they require it, they can get Verified Certificates from edX.org.

AI skills are becoming increasingly useful in today’s workplace. As explained by an assistant director at Microsoft Research AI:

    “AI is increasingly important in how our products and services are designed and delivered and that is true for our customers as well. Fundamentally, we are all interested in developing talent that is able to build, understand and design systems that have AI as a central component.”

For employees, getting AI certified is time well spent. With salaries for AI professionals going through the roof, it should be considered a worthwhile investment in their future earning potential.

    [Featured image via Microsoft]

  • Google’s Mobile-First Search Indexing Goes Live

    Google’s Mobile-First Search Indexing Goes Live

    Google has announced the rollout of its mobile-first search indexing, after more than a year of testing and experimentation. The move was first detailed in 2016 when Google wanted to use phone-optimized versions of websites to index pages in its search results.

The shift to mobile-first indexing comes from the rising trend of more people using mobile devices to browse and search the web. However, some sites have significantly different versions of content for desktop and mobile browsers, the latter often a watered-down copy of the former. “Mobile-first indexing means that we’ll use the mobile version of the page for indexing and ranking, to better help our – primarily mobile – users find what they’re looking for,” Google explained in its blog.

    Google insisted that it will only use one index in displaying search results, but will prioritize mobile-friendly sites over desktop versions. It emphasized that the index only changed how content is gathered and not how it is ranked.

    Google also allayed fears that desktop content will be removed from the index, or that mobile sites not included in the initial wave would be at a disadvantage when compared to first joiners. And if a desktop site is more relevant to the search over mobile alternatives, it will be included in the results.

    The company will select sites that follow best practices for mobile-first indexing, notifying them via Search Console. Webmasters of these sites should notice increased visits from the Smartphone Googlebot. After the shift, the mobile version of sites will be shown in Google’s search results and cached pages.

    Google assured that it will continuously evaluate content in its index to determine how mobile-friendly sites are based on best practices. Moreover, it will still prefer mobile versions of sites over Google’s fast-loading AMP pages in indexing.

    The tech company has always pushed for mobile-optimized sites, boosting the rank of mobile-friendly pages on its search results in 2015. Last January, Google announced that page loading speed will also be a ranking factor for mobile searches and slow pages will be downranked starting July 2018.

    [Featured image via Pixabay]

  • Mozilla Launches Firefox Quantum for Enterprise, IT Professionals Can Now Try Beta Version

    Mozilla Launches Firefox Quantum for Enterprise, IT Professionals Can Now Try Beta Version

    Firefox Quantum for Enterprise has now entered the Beta stage. This is the final step before Mozilla can officially release what is being touted as the latest and best version of the popular open-source browser, but with a few improvements specifically for business users.

This beta version of Mozilla’s browser is expected to meet the increasing demand for enterprise workflows at companies that have started to move away from conventional applications in favor of cloud applications.

Firefox Quantum has been designed particularly for IT professionals. However, home users shouldn’t worry that they’re missing out; the new browser will function more or less like the standard one. What sets Quantum apart is how easily it can be configured and deployed across a business’ IT infrastructure.

Quantum comes integrated with controls that allow administrators to send out pre-configured versions of Firefox. This means IT administrators can disable any features that could cause a security breach. They can even configure a default proxy or set up Quantum with a select array of bookmarks and add-ons.
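
    As a sketch of what that pre-configuration can look like, the snippet below writes an example policies.json, the file-based mechanism Firefox uses for enterprise policies. The policy names and the installation path are assumptions to check against Mozilla’s policy templates for the deployed version.

```python
# Sketch: generate a policies.json for a managed Firefox deployment.
# Policy names and the target path are assumptions; check Mozilla's templates.
import json
import pathlib

policies = {
    "policies": {
        "DisableTelemetry": True,
        "Proxy": {"Mode": "manual", "HTTPProxy": "proxy.example.com:8080"},
        "Bookmarks": [
            {"Title": "Intranet", "URL": "https://intranet.example.com", "Placement": "toolbar"},
        ],
    }
}

# policies.json lives in a "distribution" folder next to the Firefox binary;
# the exact path varies by operating system and packaging.
target = pathlib.Path("/usr/lib/firefox/distribution/policies.json")
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(json.dumps(policies, indent=2))
```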

Mozilla’s enterprise-geared browser is powered by the new Quantum engine, whose CSS engine is written in Rust, the company’s own systems programming language, which enables it to run in parallel across several CPU cores, thereby boosting performance and speed. This also allows Quantum to run different web apps simultaneously and still have enough RAM to continue running traditional apps like Word.

    Even though Mozilla has introduced all these changes and upgrades, users are still assured that their privacy remains sacrosanct. The company emphasized this core principle again in its press release, which stated that “Firefox does not track user activity to target advertising as other browsers do.”

To that end, users and administrators can turn on Tracking Protection to disable the invisible scripts that follow users as they move through different websites. Turning this feature on also makes the browser run faster, even cutting loading time in half on some sites.

    Firefox Quantum for Enterprise can now be downloaded. IT professionals who want to experience the browser’s new features should join its beta run.

  • Trump Administration Contemplates State-Run 5G Network

    Trump Administration Contemplates State-Run 5G Network

The Trump administration is said to be planning to develop a secure 5G network that could be placed under federal control. The idea, which reportedly came about due to concerns about competition and cybersecurity threats from China, was immediately met with backlash from the FCC and the wireless industry.

    Axios reported over the weekend that National Security Council officials released a memo stating the United States requires a centralized 5G network system in the next three years. The memo further outlined that the best choice would be for the government to finance and build the infrastructure before renting to telecommunication companies like AT&T, T-Mobile, and Verizon.

Officials from the White House have told Axios and Recode that the memo Axios reported on was old and out of date. However, two anonymous administration officials claimed that discussions about the proposed 5G network were still in the early stages.

    The current administration is known for being concerned about the security and economic threats posed by superpower China. The Asian giant has been aggressive in its development of 5G and it seems the Trump government is wary that China might spy on American citizens and businesses.

    The idea of an administration controlling the country’s next-generation wireless system is unheard of, and the pushback from the Federal Communications Commission (FCC) was equally surprising, considering its chairman was an appointee of the president.

    FCC Chairman Ajit Pai quickly issued a statement opposing the “proposal for the federal government to build and operate a nationwide 5G network.” He further described the proposed endeavor as “a costly and counterproductive distraction from the policies we need to help the United States win the 5G future.”

    Pai also suggested that the government should instead “push spectrum” into the marketplace and put up regulations that would encourage private companies to develop and implement the next-gen system.

A group comprising telecom industry leaders like AT&T and Verizon also opposed the plan and said on Monday that a competitive marketplace is the way to ensure the country remains a trailblazer in 5G technology.

    5G technology is expected to provide even faster speeds and almost unlimited Internet capacity when compared to the previous iterations of the wireless technology. It’s also essential for the further development of new technologies like the Internet of Things (IoT), self-driving cars, and virtual reality. AT&T and Verizon already finalized plans to introduce 5G service in limited sectors in 2018.

  • Over 500,000 Google Chrome Users Affected by Malicious Extensions

    Over 500,000 Google Chrome Users Affected by Malicious Extensions

    Google has just recently removed four extensions from the Chrome Web Store after they were discovered to be malicious. The extensions, which already had over 500,000 downloads, were used to carry out click fraud and SEO manipulation.

The malicious extensions were discovered by researchers from ICEBRG, a Seattle-based internet security company, when they investigated spikes in outbound traffic from a customer’s workstation. Upon verification, the researchers found that these outbound data transmissions were caused by a Google Chrome extension named HTTP Request Header. Apparently, the workstation was used to visit links that they suspected were advertising-related.

    The same ICEBRG researchers went on to discover three more malicious extensions that basically did the same thing as the HTTP Request Header: Nyoogle, Stickies, and Lite Bookmarks. ICEBRG then notified Google of its findings and the malicious extensions were removed from the Chrome Web Store.


    In the past, malicious browser extensions have been used to infect the workstations of unsuspecting Chrome users with spyware or even malware.  At the moment, ICEBRG believes that the extensions they discovered may have been used to scam advertisers who pay on a per-click scheme by generating fake clicks using the infected workstations. However, it is likewise possible that the same malicious add-ons could be used to spy on anyone.

    In a report published on Friday, ICEBRG explained the risk malicious extensions may pose to browser users. “In this case, the inherent trust of third-party Google extensions, and accepted risk of user control over these extensions allowed an expansive fraud campaign to succeed. In the hands of a sophisticated threat actor, the same tool and technique could have enabled a beachhead into target networks.”

Of course, this is not the first time that Chrome extensions have been used for cyber attacks. In July and August of 2017, still-unidentified hackers managed to compromise the accounts of Chrome extension developers, which were then used to automatically push extension updates capable of placing ads on sites visited by users.

    [Featured image via YouTube]

  • Google Introduces ‘Trending Searches’ and ‘Instant Answers’ to iOS App

    Google Introduces ‘Trending Searches’ and ‘Instant Answers’ to iOS App

Google is giving users of Apple products greater functionality with the addition of Twitter-like features in a recent update to its iOS app. The app now sports Trending Searches, a location-based feature introduced in the Android app last year that lets iOS users know the hottest searches in their area. In addition, the tech giant added Instant Answers to the app, a feature that gives some useful info at a glance.

    Trending Searches for iOS will have an opt-out feature

    With their iOS Google app updated, users will know the searches currently trending around them. According to The Tech Bulletin,  merely clicking on the app’s search box will display a list of trending searches made by people near a user’s location. However, it still remains unclear just how localized the coverage of the Trending Searches feature is.

Thankfully, there is an opt-out option included in the iOS update. When Trending Searches was introduced on Android last year, it was met with criticism, with some users clamoring for Google to include an option for turning off the feature. While useful to some, there were users who found it annoying as it gave trending searches made by the masses instead of content specific to their interests. Google relented by coming up with the opt-out option for people who wished to turn off the feature.

    Smarter Searches with Instant Answers

In addition, Google made some improvements to the search experience by introducing what is called Instant Answers. Basically, the app anticipates what the user is trying to type and, even before the complete search phrase is keyed in, displays the answer along with some suggestions below the search box, all before the user hits the search button.

According to TechCrunch, the answers come from Google’s facts database known as Knowledge Graph, which, in turn, sources its data from the CIA World Factbook and Wikipedia.

    [Featured Image via Pixabay]

  • Google Assistant SDK Announced

    Google today announced the release of the Google Assistant SDK. The SDK will allow developers to include Google Assistant interactions in their hardware prototypes.

    The Google Developers Blog says, “The Google Assistant SDK includes a gRPC API, a Python open source client that handles authentication and access to the API, samples and documentation. The SDK allows you to capture a spoken query, for example “what’s on my calendar”, pass that up to the Google Assistant service and receive an audio response. And while it’s ideal for prototyping on Raspberry Pi devices, it also adds support for many other platforms.

    To get started, visit the Google Assistant SDK website for developers, download the SDK, and start building. In addition, Wayne Piekarski from our Developer Relations team has a video introducing the Google Assistant SDK, below.”
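
    As a rough sketch of what calling the gRPC API from the Python client looks like, the snippet below opens an authorized channel and creates the Assistant stub. The proto package version (v1alpha1 vs. v1alpha2) and the credentials file path depend on the SDK release installed, and streaming the spoken query itself follows the request/response pattern in Google’s samples.

```python
# Sketch: authorized gRPC channel to the Google Assistant service.
# Proto version and credentials path are assumptions tied to the SDK release.
import google.auth.transport.grpc
import google.auth.transport.requests
from google.oauth2 import credentials as oauth2_credentials
from google.assistant.embedded.v1alpha2 import embedded_assistant_pb2_grpc

ASSISTANT_API_ENDPOINT = "embeddedassistant.googleapis.com"

# OAuth2 user credentials produced by the SDK's device auth flow.
creds = oauth2_credentials.Credentials.from_authorized_user_file("credentials.json")

http_request = google.auth.transport.requests.Request()
channel = google.auth.transport.grpc.secure_authorized_channel(
    creds, http_request, ASSISTANT_API_ENDPOINT)

assistant = embedded_assistant_pb2_grpc.EmbeddedAssistantStub(channel)
# assistant.Assist(request_iterator) then streams audio up and responses back,
# e.g. for a query like "what's on my calendar".
```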

  • Cloud Trace For Google Cloud Platform Gains Requested Features

Google announced some new features and functionality for Cloud Trace for Google Cloud Platform, which it released in beta last year.

    Based on user feedback, they’ve added automatic tracing and performance analysis for all App Engine projects, latency shift detection, the ability to use the Trace API to trace custom workloads, and UI tweaks for developer workflow.

“Cloud Trace now automatically instruments Google App Engine applications,” explains product manager Sharat Shroff. “It continuously evaluates all App Engine requests and periodically analyzes endpoint-level traces to identify performance bottlenecks and insights. It looks for suboptimal patterns in RPC calls and provides recommendations to fix them.”

    “Cloud Trace builds analysis reports for the most frequently used endpoints, and now we can use these reports to determine changes in your application’s latency,” Shroff says. “Using our latency shift detection algorithms, we can surface significant and minor changes in your application latencies when there’s a noticeable change. You can access this feature directly from the Analysis Reports tab within Cloud Trace.”

    You can now use the Cloud Trace API and Trace SDK to optimize the performance of custom workloads. The API can be used to add custom spans to a trace.
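
    For example, a custom workload running outside App Engine could report a span through the Trace API’s v1 patchTraces method. This is a hedged sketch: the span field names follow the v1 REST reference and should be double-checked, and do_custom_work is a placeholder for the code being timed.

```python
# Hedged sketch: record one custom span via the Cloud Trace v1 REST API.
# Field names follow the v1 reference; do_custom_work() is a placeholder.
import time
import uuid
from datetime import datetime, timezone

import google.auth
from google.auth.transport.requests import AuthorizedSession

def rfc3339(ts):
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

def do_custom_work():
    time.sleep(0.25)  # placeholder for the workload being traced

credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/trace.append"])
session = AuthorizedSession(credentials)

start = time.time()
do_custom_work()
end = time.time()

body = {
    "traces": [{
        "projectId": project_id,
        "traceId": uuid.uuid4().hex,  # 32-character hex trace ID
        "spans": [{
            "spanId": "1",
            "name": "custom-workload",
            "startTime": rfc3339(start),
            "endTime": rfc3339(end),
        }],
    }]
}
resp = session.patch(
    "https://cloudtrace.googleapis.com/v1/projects/%s/traces" % project_id,
    json=body)
resp.raise_for_status()
```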

    Shroff details these features as well as the user interface changes in a blog post here.

    Image via Google