WebProNews

Tag: Machine Learning

  • Alteryx Acquires Feature Labs, An MIT-Born Machine Learning Startup

    Data science is one of the fastest growing segments of the tech industry, and Alteryx, Inc. is front and center in the data revolution. The Alteryx Platform provides a collaborative, governed platform to quickly and efficiently search, analyze and use pertinent data.

    To continue accelerating innovation, Alteryx announced it has purchased a startup with roots in the Massachusetts Institute of Technology (MIT). Feature Labs “automates feature engineering for machine learning and artificial intelligence (AI) applications.”

    Combining the two companies’ platforms and engineering will result in faster time-to-insight and time-to-value for data scientists and analysts. Feature Labs’ algorithms are designed to “optimize the manual, time-consuming and error-prone process required to build machine learning models.”

    Feature Labs makes its open-source libraries available to data scientists around the world. In what is no doubt welcome news, Alteryx has already committed to continued support of the open-source community.
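    Feature Labs is best known for its open-source Featuretools library, which generates candidate features from relational data automatically. As a rough, hedged illustration of the underlying idea only, the sketch below applies a small set of aggregation “primitives” to an invented transaction table with pandas; the column names and primitives are hypothetical, and this is not Feature Labs’ actual API.

    ```python
    # Minimal sketch of the idea behind automated feature engineering:
    # generate many candidate aggregation features from raw event data
    # instead of hand-crafting each one. Illustrative only; not the
    # Featuretools implementation.
    import pandas as pd

    # Hypothetical transaction log: one row per purchase.
    transactions = pd.DataFrame({
        "customer_id": [1, 1, 2, 2, 2, 3],
        "amount": [20.0, 35.5, 12.0, 80.0, 5.25, 60.0],
        "timestamp": pd.to_datetime([
            "2019-06-01", "2019-06-15", "2019-06-03",
            "2019-06-20", "2019-07-01", "2019-06-10",
        ]),
    })

    # Apply a library of aggregation "primitives" to the numeric column,
    # producing one feature row per customer for a downstream model.
    primitives = ["count", "sum", "mean", "max", "std"]
    features = transactions.groupby("customer_id")["amount"].agg(primitives)
    features["days_since_last_purchase"] = (
        pd.Timestamp("2019-08-01")
        - transactions.groupby("customer_id")["timestamp"].max()
    ).dt.days

    print(features)
    ```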

    From the Press Release:

    “Feature Labs’ vision to help both data scientists and business analysts easily gain insight and understand the factors driving their business matches the Alteryx DNA. Together, we are helping customers address the skills gap by putting more powerful advanced analytic capabilities directly into the hands of those responsible for making faster decisions and accelerating results. We are excited to welcome the Feature Labs team and to add an engineering hub in Boston,” said Dean Stoecker, co-founder and CEO of Alteryx.

    “Alteryx maintains its leadership in the market by continuing to evolve its best-in-class, code-free and code-friendly platform to anticipate and meet the demands of the 54 million data workers worldwide. With the addition of our unique capabilities, we expect to empower more businesses to build machine learning algorithms faster and operationalize data science,” said Max Kanter, co-founder and CEO of Feature Labs. “Feature engineering is often a time-consuming and manual process and we help companies automate this process and deploy impactful machine learning models.”

  • Reddit Announces New Rules to Fight Bullying and Harassment

    Shortly after Twitter announced new measures to protect users from online abuse, Reddit has unveiled new rules designed to help moderators combat bullying and abuse on their platform.

    Reddit moderator landoflobsters explained the new policy in a post to /r/announcements. In the post, landoflobsters detailed how the previous policy required bad behavior to be “continued” or “systematic” before moderators had authority to take action. Similarly, the threshold for someone who feared for their safety due to harassment was set too high, and it was unclear whether the same rules applied to individuals and groups. The end result was that Reddit quickly became a haven for trolls.

    With the site’s new rules, moderators will be able to act much faster to protect users. In particular, the new rules take a “big picture” approach to moderating.

    “The changes we’re making today are trying to better address that, as well as to give some meta-context about the spirit of this rule: chiefly, Reddit is a place for conversation,” said landoflobsters. “Thus, behavior whose core effect is to shut people out of that conversation through intimidation or abuse has no place on our platform.”

    Reddit will also accept reports of abuse from bystanders, rather than requiring the individual being harassed to report it. The goal is to reduce any additional burden on someone who may already be suffering distress.

    The announcement also goes on to explain that Reddit will be using more machine learning tools to sort and prioritize human reports. While humans will still make the decisions about whether behavior rises to the level requiring banning, machine learning will make it easier for human moderators to deal with the volume.
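    Reddit did not describe its models in the announcement, but the general pattern of using machine learning to triage reports can be sketched. The example below is purely hypothetical (toy reports, invented labels, and a simple scikit-learn text classifier) and shows how incoming abuse reports might be scored and sorted so that human moderators review the highest-risk ones first.

    ```python
    # Hypothetical sketch of using machine learning to sort and prioritize
    # abuse reports for human review. The data and model are invented for
    # illustration; Reddit has not published its actual approach.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy historical reports labeled by moderators (1 = action was taken).
    reports = [
        "user keeps sending me threatening messages",
        "this comment is mildly rude",
        "they followed me to other subreddits to harass me",
        "I just disagree with this post",
    ]
    action_taken = [1, 0, 1, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(reports, action_taken)

    # Score the incoming queue and surface the riskiest reports first;
    # humans still make the final call on every one of them.
    queue = ["someone is posting my home address", "this meme is unfunny"]
    scores = model.predict_proba(queue)[:, 1]
    for report, score in sorted(zip(queue, scores), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {report}")
    ```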

    Towards the end of the announcement, landoflobsters cautions that “as with any rule change, this will take some time to fully enforce. Our response times have improved significantly since the start of the year, but we’re always striving to move faster. In the meantime, we encourage moderators to take this opportunity to examine their community rules and make sure that they are not creating an environment where bullying or harassment are tolerated or encouraged.”

  • Infosys & SAP Announce Alliance to Accelerate Clients’ Enterprise Digital Transformation

    SAP SE announced a collaboration program with Amazon Web Services (AWS), Google Cloud and Microsoft Azure at SAPPHIRE NOW 2019. The project, named “Embrace,” will also include global strategic service partners (GSSP).

    At the same time, Infosys—“a global leader in next-generation digital services and consulting”—has announced Innov8, a new program designed to help “clients transform their business model to one based on predictable OPEX-based costs.”

    According to a press release Infosys issued Tuesday, the two companies are planning a collaborative alliance to bring the benefits of Embrace and Innov8 to customers “and offer flexible points of entry to the SAP environment for both existing and new cloud users, all within one comprehensive end-to-end business solution.”

    With more than 70 ready-to-deploy artificial intelligence, machine learning, blockchain, analytics and Internet of Things use cases, Innov8 provides a way for companies to invest, innovate and build intelligent enterprises.

    From the Infosys Press Release:

    Dinesh Rao, Executive Vice President, Infosys, said, “Navigating the cloud ecosystem requires a structured strategy that provides a consolidated view into a company’s overall transformation journey. Through Innov8, we are focused on leveraging our industry knowledge and experience to accelerate the delivery of business solutions. Through this collaboration, we are focusing on ensuring that our clients are able to rapidly adopt tomorrow’s business models today.”

    David Robinson, Senior Vice President, SAP Cloud Business Group and Global Lead, Embrace program at SAP said, “SAP is excited about its plans to partner with Infosys to help clients invest in purposeful innovation to build their intelligent enterprise. Innov8 for Embrace leverages Infosys’ industry knowledge and expertise on SAP and cloud technologies. This is a platform that is delivered on a cloud hyperscale environment with SAP digital solutions delivering end-to-end business outcomes at accelerated pace. We couldn’t be more excited.”

    David McIntire, IT Services Research Director at NelsonHall, said “The value of SAP S/4HANA adoption extends beyond IT and into transforming how businesses operate. Innov8 for Embrace has the potential to combine industry-tailored intelligence, applications and processes with simplified OPEX pricing and cloud hosting into an integrated offering aimed at helping companies maximize the business value of adopting SAP S/4HANA.”

  • Our Machine Learning Platform Helps Brands Retain Their Customers, Says Medallia CEO

    “We’re a platform that helps some of the biggest brands in the world really understand their customers in live time and communicate with them while they’re in an experience,” says Medallia CEO Leslie Stretch. “Instead of a survey after they’ve left a hotel, they communicate with them while they’re there, check in on the experience and improve it. This helps them retain their customer and perhaps sell them another experience. It’s this machine learning platform that does that.”

    Leslie Stretch, President and CEO of Medallia, discusses the company’s IPO and how the company uses machine learning to react to customer signals in real-time rather than after they leave an experience in an interview on CNBC:

    Our Machine Learning Platform Helps Brands Retain Their Customers

    We’re a Silicon Valley tech company. We’re a platform that helps some of the biggest brands in the world really understand their customers in live time and communicate with them while they’re in an experience. So instead of a survey after they’ve left a hotel, they communicate with them while they’re there, check in on the experience and improve it. This helps them retain their customer and perhaps sell them another experience. It’s this machine learning platform that does that.

    Anything is a signal to us: a survey, an IoT signal, a transaction. Somebody buys something, they have a bad experience at the pool, or they’re on an airline and they don’t quite like the service that they’re getting, and they can feed that back immediately instead of waiting until the experience is finished. We’re all about platform and signal. We’re very different from the survey companies, the feedback companies, which are the old experience economy companies. It’s the application of deep Silicon Valley technology to the problem.

    The Customer Is At the Center of Every Digital Transformation

    Customer experience has become a major theme for every big brand in the world today. I also think that our technology is innovative and very different. The application of machine learning and the platform and just the operationalization of a private Silicon Valley company are really what I’ve done in the past. Just bringing basic blocking and tackling to go-to-market and marketing and building up the sales force. So very simple and taking the story out to a bigger market.

    We actually just signed a revenue share partnership with Salesforce. We have a partnership for Marketing Cloud with Adobe. They’re great alliances for us. We can present our machine learning, our unstructured data, into their Marketing Cloud, Sales Cloud, and Service Cloud. That’s brand new for us this year. It’s great to go to market with leaders like that. Both Adobe and Salesforce completely understand the customer is at the center of every digital transformation and we are at the center of that.

    It’s Not For the Faint-Hearted, But We Invested a Ton In It

    We spent more than half a billion dollars building this platform. That sets us apart from the traditional simple survey vendor. We’ve spent a ton of money on the privacy layer and on the security layer. We’ve worked already for a decade with some of the biggest brands in the world whose customer information is precious. We’re HIPAA certified for healthcare as well. So we take that very seriously. It’s not for the faint-hearted, but we invested a ton in it and it’s worth it.

    Our Machine Learning Platform Helps Brands Retain Their Customers, Says Medallia CEO Leslie Stretch
  • HPE CEO: Reason For Acquiring Cray – The Data Around Us is Exploding

    Hewlett Packard Enterprise and Cray have announced that the companies have entered into a definitive agreement under which HPE will acquire Cray for $35.00 per share in cash, in a transaction valued at approximately $1.3 billion, net of cash. “The main reason why we decided to pursue this acquisition is that the data around us is exploding,” says HPE CEO Antonio Neri. “That data has value and the need to process that data faster continues to grow.”

    Antonio Neri, CEO of Hewlett Packard Enterprise, discusses the company’s intent to acquire global supercomputer leader Cray, in an interview on Bloomberg Technology:

    Reason For Acquiring Cray – Data Around Us is Exploding

    I’m super excited about the announcement today. The main reason why we decided to pursue this acquisition is that the data around us is exploding. That data has value and the need to process that data faster continues to grow. The need for high-performance computing is one element of processing that data faster. The combination of great technologies with the Hewlett Packard Enterprise portfolio, which includes both HPE Apollo and SGI, gives us a unique set of capabilities to get us to the right business outcome from the data. What we are talking about is (improved outcomes) for machine learning, AI, as well as big data and intensive workloads. Cray brings these capabilities.

    Cray has two-thirds of its business coming from the government side and one-third from the commercial side. Hewlett Packard Enterprise is the opposite, with two-thirds from the commercial side and one-third from the government side. Obviously, the government has already asked Cray to build exascale computing. That’s on the basis of the foundation technologies that Cray has developed for some time, which is what we call the interconnect fabric. For us, that level of innovation is important to scale our portfolio and continue to enter new markets like oil and gas, manufacturing, as well as academia.

    Thirteen Acquisitions Under CEO – All Very Successful

    I have done now thirteen acquisitions with this one. We have had an incredible discipline based on return on invested capital where we have acquired (important) intellectual property as well as bringing talent to the organization. Each of them has been very successful including Aruba Networks as well as what I call the SGI acquisition, Nimble Storage, and so forth. They have all been very successful.

    When you think about the type of business, it has obviously been a little bit lumpy on the Cray side, but that’s because it takes time to build the systems, with CAPEX upfront and then acceptance on the back end to recognize revenue. We believe the combination of Cray added to our scale, which is significantly larger, will smooth that out from a revenue and profit perspective. It will also limit the CAPEX investment because both companies have similar capabilities and now we can use both in a scalable way that we couldn’t do before.

    HPE CEO Antonio Neri: Reason For Acquiring Cray – Data Around Us is Exploding

    Also Read:

    Next Frontier: Edge Centric, Cloud-Enabled, Data-Driven, Says HPE CEO
    Extracting Value From Data is a Massive Opportunity, Says Hewlett Packard Enterprise CEO
  • Carbon Black Uses AI to Analyze 500 Billion Daily Security Events, Says CEO

    “Carbon Black is analyzing 500 billion security events across the globe every single day,” says Carbon Black CEO Patrick Morley. “Of course, you can’t do that with people. You have to do that with a number of techniques. We certainly leverage the compute capability of the cloud. Then we apply AI and machine learning models to that. It allows us to see patterns across the globe that help many many companies stop the bad guys.”

    Patrick Morley, CEO of Carbon Black, discusses how their company uses AI and machine learning to analyze in real-time 500 billion security events daily in an interview on Bloomberg Technology:

    China is the Number One Nation Driving Cyber Attacks

    As a cybersecurity company, we have an interesting relationship with certain nations around the world. This is particularly true with those that are very active from a cyber standpoint. China, in particular, has statistically been shown to be the number one nation across the globe that is driving cyber attacks. So our relationship with China is a different relationship than many other public companies across the globe. We don’t actively sell into the market because we are helping many companies actually protect themselves from attacks that are generated out of China.

    As I tell all of our employees we are building a company for the long term. Our stock is going to be impacted by things we control and many things we don’t control. When I look at my app and I see red everywhere it’s certainly disturbing. Obviously, that will impact companies that are going to buy my product eventually. If that has an impact on other public companies and private companies, it will impact us eventually.

    Cyber is One of the Most Interesting Spaces in Tech

    We gave (investors) a consistent outlook in Q2. Analysts reacted positively which is good. Again, we are building for the long term a company that matters in cyber. I think cyber is one of the most interesting spaces in tech right now because of everything around us. We come back to cyber again and again.

    If you look at all the news about Facebook, cyber is in it. If you look at some of the geopolitical issues in Europe and in the U.S., cyber comes in. It’s an important area and we are a new guard of companies helping to change it and make it better and more effective for companies. We are building value around the company.

    Uses AI to Analyze 500 Billion Security Events Per Day

    Some of those (competing) providers (such as Cisco and Fortinet) work in a different part of the market than we do. It’s a big market. It’s a $100 billion market that’s going through fundamental change. We do provide a platform that does compete (directly) with some of the traditional players such as Symantec and others. The way we compete is we are based on one core principle. If you look at the long-term trend of where the world is going, you need to leverage the power of data in order to figure out what’s happening. We leverage data in a way that allows us to see and to stop the adversary in ways that traditional products can’t.

    Carbon Black is analyzing 500 billion security events across the globe every single day. Of course, you can’t do that with people. You have to do that with a number of techniques. We certainly leverage the compute capability of the cloud. Then we apply AI and machine learning models to that. It allows us to see patterns across the globe that help many many companies stop the bad guys.

    Carbon Black Uses AI to Analyze 500 Billion Daily Security Events
  • How WeWork is Using Technology to Revolutionize Office Space Worldwide

    “We open 15 to 20 buildings a month,” says WeWork CTO Shiva Rajaraman. “Anything we can use to automate or augment a person through machine learning we’re taking all that data in one central place and starting to create an engine around that. That’s key to successful scaling today. When we think about enterprise we sort of step back and say what’s our Google Analytics for commercial space?”

    Shiva Rajaraman, Chief Technology Officer of WeWork, discusses how WeWork is using technology to revolutionize office space worldwide in an interview on Bloomberg:

    How Do We Offer Space As a Service?

    There are three capabilities when we think about WeWork. One is how do we offer space as a service? If you just think about it, it’s really basic. What location do you need? Where do you need it? How long do you need it? Are there different pricing models for it? One of the things we’ve done is effectively taken all of this space and put it into a big database and we start to shape it based on what we see out there in the market. Some of that is just pricing automation at the end of the day. Some of it is how do we automate that supply chain of delivering a building?

    We open 15 to 20 buildings a month. Anything we can use to automate or augment a person through machine learning we’re taking all that data in one central place and starting to create an engine around that. That’s key to successful scaling today. The biggest technically challenging thing is operational scale. If you step back you don’t want a lot of variability. You want to step back and say, “Hey, can I deliver this building on time at quality as people need it?” That’s where you need operational technology that really works in a way that normally construction has not worked in the past.

    What’s Our Google Analytics for Commercial Space?

    One of the key things on the strategy side is that as we see this demand and we start to get critical mass in different areas, can we disrupt the business model a little bit? Let me give you an example. If you take someone like GE Health in Seoul, South Korea, they had underutilized real estate. We redesigned that so they can use it in a more flexible way. We also created a new membership called the City Pass which gives all of their employees access to WeWork throughout Seoul. Now they can go where they’re more productive. One of the key things we’re looking at right now is what’s a density that translates to interesting memberships that allow people to be more productive?

    Let’s talk about the M&A that’s created a fabric that we can start to offer to enterprises. When we think about enterprise we sort of step back and say, “What’s our Google Analytics for commercial space?” Can we help these enterprises create a good workplace experience, through things like room booking (service) all the way to understanding how they use space, so they can come and use WeWork on demand if they need it? We can also help them grow in the future if they’re looking at new markets to expand into.

    How WeWork is Using Technology to Revolutionize Office Space Worldwide


  • Machine Learning Should Be Used to Deliver Great Brand Experiences, Says PagerDuty CEO

    PagerDuty began trading on the New York Stock Exchange for the first time this morning and is now trading at more than 60% above its IPO price of $24. That gives the company a market capitalization of more than $2.7 billion. PagerDuty offers a SaaS platform that monitors IT performance. The company had sales of $118 million for its last fiscal year, up close to 50% over the previous year.

    The company uses machine learning to inform companies in real-time about technical issues. “Our belief is that machine learning and data should be used in the service of making people better, helping people do their jobs more effectively, and delivering those great brand experiences every time,” says PagerDuty CEO Jennifer Tejada. “PagerDuty is really about making sure that our users understand that this could be a good thing, being woken up in the middle of the night if it’s for the right problem. It’s a way that can help you deliver a much better experience for your customers.”

    Jennifer Tejada, CEO of PagerDuty, discusses their IPO and how machine learning should be used to deliver great brand experiences in an interview on CNBC:

    It’s Gotten Harder for Humans to Manage the Entire IT Ecosystem

    If you think about the world today, it’s an always-on world. We as consumers expect every experience to be perfect. Every time you wake up in the morning, you order your coffee online, you check Slack to communicate with your team, and maybe you take a Lyft into work. Sitting behind all of that is a lot of complexity, many digital and infrastructure-based platforms that don’t always work together the way you’d expect them to. As that complexity has proliferated over the years, and because developers can deploy what they like and use the tools that they want, it’s gotten harder for human beings to really manage the entire ecosystem even as your demands increase.

    You want it perfect, you want it right now and you want it the way you’d like it to be. PagerDuty is the platform that brings the right problem to the right person at the right time. We use machine learning, sitting on ten years of data, data on human behavior and data on all these signals that are happening through the system, and it really helps the developers that sit behind these great experiences to deliver the right experience all the time.

    Machine Learning Should Be Used to Deliver Great Brand Experiences

    Going public is the right time for us right now because there’s an opportunity for us to deliver the power of our platform to users all over the world. We are a small company and we weren’t as well-known as we could be and this is a great opportunity to extend our brand and help developers and employees across teams and IT security and customer support to deliver better experiences for their end customers all the time.

    At PagerDuty we take customer trust and user trust very seriously. We publish our data policy and we will not use data in a way other than what we describe online. We care deeply about the relationship between our users and our platform. Our belief is that machine learning and data should be used in the service of making people better, helping people do their jobs more effectively, and delivering those great brand experiences every time. PagerDuty is really about making sure that our users understand that this could be a good thing, being woken up in the middle of the night if it’s for the right problem. It’s a way that can help you deliver a much better experience for your customers.


  • Toyota P4 Concept Car Introduced with Guardian Technology – May Save Lives by Ten-Fold

    Toyota has introduced the hybrid P4 concept car that includes increased accident protection that is much “smarter” than its predecessor. Toyota says that with greater computing power, its systems can operate more machine learning algorithms in parallel for faster learning. They say it can process sensor inputs faster and react more quickly to the surrounding environment.

    The technology was created by the Toyota Research Institute (TRI) as part of their autonomous vehicle R&D. P4 adds two additional cameras to improve situational awareness on the sides and two new imaging sensors – one facing forward and one pointed to the rear – specifically designed for autonomous vehicles.

    Toyota Research Institute Rolls Out P4 Automated Driving Test Vehicle at CES.

    Additionally, the imaging sensors feature new chip technology with high dynamic range. The radar system has been optimized to improve the field of view, especially for close range detection around the vehicle perimeter. The LIDAR sensing system with eight scanning heads carries over from the previous test model, Platform 3.0, and morphs into the new vehicle design.

    “If we are able to introduce technology that theoretically can reduce fatalities by ten-fold or perhaps even a hundred-fold, we can make consumers and society safer,” says Bob Carter, Toyota North America Executive Vice President.

    Bob Carter, North America Executive Vice President of Toyota, discussed the new Guardian technology at length on Fox Business:

    Toyota P4 Concept Car With Guardian Technology

    This is I believe our fourth or fifth year where we’re introducing our newest technology, particularly in the autonomous driving area, here at the Consumer Electronics Show. The vehicle we are introducing today is a concept car called P4. It’s our fourth platform. We are introducing what we call our Guardian technology. Guardian is considered the co-pilot sitting in the passenger seat for you.

    We are going to demonstrate to the media today where we actually experience an accident on Interstate 80. What the Guardian technology does, and it’s an offshoot of development for fully autonomous driving, is it monitors all the conditions around the car all the time. In the example of this one unfortunate accident, in which nobody was hurt, one car drifted out of its lane into another car and then pushed it into the guardrail. This is a very typical situation.

    Guardian Technology Takes Control Prior to Accidents

    Our Guardian technology senses that and then momentarily takes the controls from the driver. This includes acceleration, braking, and steering. It can navigate the car out of the area of the accident and then immediately hand back the controls to the driver. The end result is that the driver is still in control of the car. He has the enjoyment of driving, yet in an unforeseen circumstance, technology can take over to avoid the accident.

    It was developed by the Toyota Research Institute that we have in Silicon Valley. They’re working on a number of different technologies for the future that we believe are really going to enhance the safety of society. Unfortunately, accidents are something that do happen. In North America last year there were 40,000 fatalities on our roads.

    Guardian Technology May Reduce Fatalities by Ten-Fold

    If we are able to introduce technology that theoretically can reduce fatalities by ten-fold or perhaps even a hundred-fold, we can make consumers and society safer. In fact, we are so convinced that this technology is the correct path for the future that we are opening it up to other auto manufacturers. We would love to see every vehicle on the road today have this sort of technology available for consumers.

    Last year, there were 17.2 million vehicles sold. Approximately one percent of those were full battery electric vehicles. We have a very robust system we use with hybrids which is a combination of our gasoline engines and electric. These have been on the market since 1997. We think it is going to take some time for the market to advance but later on next decade we believe electrification will become mainstream in the North American market.


  • SAP Massively Going for Expansion Into Multi-Cloud World, Says CTO

    “We’re massively going for the expansion into this multi-cloud world,” says Björn Goerke, SAP CTO & President of the SAP Cloud Platform. “We strongly believe that the world will remain hybrid for a number of years and we’re going in that same direction with the SAP Cloud Platform.”

    Björn Goerke, SAP CTO & President SAP Cloud Platform, recently discussed the future of the SAP Cloud Platform in an interview with Ray Wang, the Founder & Chairman of Constellation Research:

    Massively Going for Expansion Into Multi-Cloud World

    We’re massively going for the expansion into this multi-cloud world. We strongly believe that hybrid clouds will play a major role in the coming years. If you also follow what the hyperscalers are doing, Amazon was the last one to announce an on-premises hybrid support model. We strongly believe that the world will remain hybrid for a number of years and we’re going in that same direction with the SAP Cloud Platform.

    We announced partnerships with IBM and ANSYS already and there will be more coming. We’re totally committed to the multi-cloud strategy, driving the kind of choice for customers that they demand. But then what we’re more and more focusing on is business services and business capabilities. It’s about microservices as well. It’s really about business functionality that customers expect from SAP. We are an enterprise solutions company.

    It’s Really About No Code and Low Code Environments

    With our broad spectrum of 25 industries, we support all the lines of business within a corporation, from core finance to HR to procurement, you name it. We are focused on a high level of functionality that we can expose via APIs and microservices on a cloud platform to allow customers to quickly reassemble and orchestrate customer-specific differentiating solutions.

    There is no other company out there in the market that has the opportunity to really deliver that on a broad scale worldwide to our corporate customers.

    That’s where we’re heading and that’s where we’re investing. We’re working on simplifying the consumption of all of this. It’s really about no code and low code environments. You need to be able to plug and play and not always force people to really go down into the trenches and start heavy coding.

    SAP Embedding Machine Learning Into Applications

    Beyond that, machine learning is everywhere and on everybody’s mind. What we’re doing is making sure that we can embed machine learning capabilities deep into the application solutions. It can’t be that every customer needs to hire dozens or even hundreds of data scientists to figure these things out.

    The very unique opportunity that SAP has is to take our knowledge in business processes, take the large data sets we have with our customers, and bring machine learning right into the application for customers to consume out of the box.

    RPA is a big topic as well, of course. We believe that 50 percent of ERP processes can potentially be automated, for the most part, within the next few years. We are heavily investing in those areas as well.

    Focused on Security, Data Protection, and Privacy

    Especially if you think about the level of connectivity, companies opening up their corporate environments more and more, clouds being on everybody’s mind, and the whole idea of making access to information and processes available to everybody in the company and in the larger ecosystem at any point in time from anywhere, of course that raises the bar for what security has to deliver. So it’s a top-of-mind topic for everybody.

    There are a lot of new challenges also from an architectural perspective with how these things are built and how you communicate. We have a long-standing history as an enterprise solution provider, so we know exactly what’s going on there. There’s security, and there are data protection and privacy requirements that companies have to comply with these days. I think we’re well positioned to serve our customers’ needs there.

    https://youtu.be/JwXU89MrdaA


  • How Pandora Uses AI To Power Music Discovery

    Pandora is considered the world’s most powerful music discovery platform, using its proprietary algorithm to determine which music to play to a subscriber at any given time. The question is how do they do it so successfully?

    Pandora’s Data Science Manager says that it’s not about AI and machine learning replacing humans. He says it’s about those two working together.

    Andreas Ehmann, Manager of Research and Data Science at Pandora, recently discussed how Pandora uses AI and machine learning via its Music Genome Project to power Pandora:

    Using Data to Teach Computers How to Listen to Music

    Music is an art form, and behind it are certain objective properties: what instruments went into making it, the overall sound style, and artistic expressions of intent. It’s not just about objectively understanding music itself. We’re using that data to teach computers how to listen to music.

    There are about 450 traits that we’re looking at for every song. Is it a breathy voice like Björk or perhaps a smooth vocal like Sade? Is there a swing or a shuffle feel to the beat, or is there a lot of syncopation? There’s always this talk about AI and machine learning replacing humans when in reality it’s a loop. I think really a lot of the future is those two working together.

    Challenge is Determining When to Introduce a New Song

    The machine can tell you pretty well that someone’s singing in a piece of music. A machine might have a little harder time telling you what language that person is singing in. The type of music we listen to and how we behave when we listen to it can really tell us a lot about ourselves. What do you do the first time you’ve ever heard a song? Are you open to it? Do you follow the mainstream or do you have very niche interests? Underlying all of those core behaviors are some really fundamental personality traits.

    The challenge with music discovery is knowing when the right moment is to hear a new song. That’s where you have to start learning about people. If you’re working and very concentrated, you want to be listening to something familiar, and hearing something you’ve never heard before is oftentimes a bit jarring.

    Biggest Misconception About Music Streaming and Data

    The biggest misconception about music streaming and data is that everything behind it is an algorithm. When we fundamentally think about how the algorithms work we have to think back to how we used to discover music before we had streaming services. We would listen to the radio or we would learn from our friends.

    What’s at the heart of a lot of these algorithms is actually connecting you to all of the other people out there that you’re never ever going to meet. The trick then becomes how do you combine all of those sources of recommendations with your past listening behavior and with your current circumstance to pick the best song for you right now?
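    For readers curious how those pieces might fit together, the following is a hedged, simplified sketch (made-up songs, traits, and weights, not Pandora’s actual system) of blending a content-based score from curated song traits with a collaborative score drawn from other listeners’ behavior.

    ```python
    # Toy illustration of blending the two signal types described above:
    # curated song traits (content-based) and other listeners' behavior
    # (collaborative). All names and numbers are invented.
    import numpy as np

    songs = ["song_a", "song_b", "song_c"]

    # A few hand-annotated traits per song (e.g., breathy vocal, swing feel,
    # syncopation), scaled 0-1. Pandora describes roughly 450 per song.
    traits = np.array([
        [0.9, 0.1, 0.4],   # song_a
        [0.8, 0.2, 0.5],   # song_b
        [0.1, 0.9, 0.7],   # song_c
    ])

    # Implicit feedback from listeners who resemble this user:
    # the fraction of similar listeners who finished each song.
    collaborative = np.array([0.35, 0.80, 0.20])

    # Content score: cosine similarity of each song to the user's last liked song.
    last_liked = traits[0]
    content = traits @ last_liked / (
        np.linalg.norm(traits, axis=1) * np.linalg.norm(last_liked)
    )

    # Blend the two sources; the weight would normally depend on context,
    # e.g., lean familiar while the listener is concentrating at work.
    blend = 0.6 * content + 0.4 * collaborative
    print(songs[int(np.argmax(blend))])
    ```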

    More Pandora News:

    SiriusXM CEO Jim Meyer: Audio is Thriving Like Never Before

    SiriusXM CEO: Pandora to Make Sirius Subscribers More Sticky

    Pandora Co-Founder: Apple, Amazon, Google is Going to Rue the Day They Let Pandora Get Away

  • Rand Hindi: Human-Like Artificial Intelligence is Never Going to Exist

    Dr. Rand Hindi says that without emotional intelligence machines will never be able to obtain human-like artificial intelligence. Reminiscent of Data on Star Trek: The Next Generation, Hindi says that despite the impressive ability of machines to learn from other machines and to solve logic problems better than humans, most decisions humans make are actually emotionally driven and machines simply don’t have an emotional IQ.

    Dr. Rand Hindi, CEO of cutting edge AI technology company Snips, recently talked about the future of AI at LinkedIn Talent Connect:

    Human-Like AI is Never Going to Exist

    I want to talk to you about the reason why I believe that human-like artificial intelligence is never going to exist and what that means for the future of work. Artificial intelligence is the ability to reproduce human behavior in a machine. That’s it. Take what a human can do intelligently, put it in a machine and you’ve got artificial intelligence.

    Within AI you’ve got one type of way to achieve this, which is called machine learning. The idea of machine learning is that you’re effectively teaching machines to reproduce a behavior by giving them examples. It’s a little bit like a kids’ book where you have pictures of animals and the name of the animal is written, and then after seeing a few pictures of horses your kids know how to recognize horses. It’s exactly the same thing in machines.

    Machine Learning is a Very Big Deal

    Machine learning is a very big deal because up until now when you wanted to automate something a human had to first understand what was going on, then sit down and program a machine to do that. Automation was limited to what humans were able to understand. With machine learning all you need is data collected from what you’re trying to automate and the machine does everything else. You no longer need a human expert in the loop.

    Within machine learning there is one type of algorithm that’s called deep learning. Deep learning is a branch of machine learning, which is a branch of artificial intelligence, and you could consider all three to be interchangeable today, but that’s going to change in the future. I don’t believe the term artificial intelligence is actually going to be used in marketing in the next few years.

    Deep Learning Has Been a Huge Revolution

    How have we been using deep learning? Deep learning has been a huge revolution. We see it happening for self-driving cars. We see it happening for medicine. Medicine is a very important use case for artificial intelligence because we have today AI that can diagnose x-rays or MRIs better than humans can.

    You’ve probably heard about Alexa, the voice assistant from Amazon. This is one of the fastest growing consumer products ever. Rumors are that one in six Americans uses it. This was not possible before because deep learning had not yet enabled us to talk to machines naturally.

    Is AI Getting Out of Control?

    Let me tell you about the world champion playing against Google’s artificial intelligence at the game of Go. The game of Go was considered to be extremely complicated for an AI to beat because the number of different combinations meant that the only way for a machine to beat a human was to actually learn how to play the game. We thought this was still ten years in the future. The way that they made this work was really interesting.

    They took one artificial intelligence and they made it play against another one. So one was playing the white side, one was playing the black side, but the trick is that every time one of the AI played a move the other one gave it feedback on that move. By mutually reinforcing each other by playing millions and millions of games, eventually they learned how to play the game better than any human.
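    To make the self-play idea concrete, here is a deliberately tiny, hypothetical sketch: two sides share the same learner and play the simple game of Nim against each other, improving only from win/loss feedback. It is the same basic loop Hindi describes, though AlphaGo’s actual method (deep neural networks plus tree search) is far more sophisticated.

    ```python
    # Toy self-play learner for Nim (take 1-3 stones; taking the last stone
    # wins). Both sides share one value table, so every game trains the
    # learner from both perspectives. Not AlphaGo's method, just the idea.
    import random
    from collections import defaultdict

    q = defaultdict(float)   # learned value of taking `move` stones when `stones` remain
    random.seed(0)

    def choose(stones, eps=0.2):
        """Pick a move for whoever is to act: mostly greedy, sometimes exploratory."""
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < eps:
            return random.choice(moves)
        return max(moves, key=lambda m: q[(stones, m)])

    for _ in range(20000):   # the "millions of games", scaled down
        stones, history, player = 10, [], 0
        while stones > 0:
            move = choose(stones)
            history.append((player, stones, move))
            stones -= move
            player = 1 - player
        winner = 1 - player  # whoever took the last stone wins
        for who, state, move in history:
            reward = 1.0 if who == winner else -1.0
            q[(state, move)] += 0.05 * (reward - q[(state, move)])

    # The greedy policy usually settles on the winning strategy:
    # leave your opponent a multiple of 4 stones.
    print({s: choose(s, eps=0.0) for s in range(1, 11)})
    ```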

    At the time, when they played against the world champion, the world champion won one out of five games, so this was amazing. But people felt a little bit reassured; they were like, a 20 percent chance of surviving AI, that’s still not bad! However, the same AI kept on learning. Today, not a single human can beat that AI at a single game. But there is more: there is a new version of this AI that beats the AI that beats every human at every game.

    When I saw that I was like, oh my god, this is just getting out of control, this is getting out of our hands. But what you need to understand is that everything I just talked about, however impressive it is, is still something that is called narrow artificial intelligence. Effectively, those machines are able to do one thing, perhaps do it better than a human, but they’re only doing this one specific thing. The AI that played the game of Go was a breakthrough, but it doesn’t know how to do anything but play the game of Go.

    Machines Will Never Have Emotional Intelligence

    Now people are working on something that’s called general artificial intelligence. This is the idea that a machine could solve any logical task, that it could reason, that it could transfer the learning it had from one thing to something else. This is major because if you can have a general form of intelligence and reasoning then potentially machines could do anything that seems like intelligence.

    But this is still not what you see in movies. What you see in movies is a very human-like artificial intelligence. It’s not just reasoning and logic, it also includes the ability to emotionally connect with humans. So artificial human intelligence is really this combination of logical intelligence and emotional intelligence. It’s IQ plus EQ. If you only take IQ into the equation you don’t end up having human intelligence.

    Why is emotional intelligence important? It’s a way that as humans we can solve paradoxes. A paradox is a mathematical problem for which there is no logical solution. If a machine is not able to have emotional intelligence it will never be able to solve logical traps, which means as humans we can use our EQ to find traps for machines that have very high IQ.

    You might be thinking that machines are building emotional intelligence as well. We see all those amazing robots, people develop feelings for those robots as well. However, I believe that you will never have true emotional intelligence in machines.

    EQ First Requires Artificial Consciousness

    Emotional intelligence first requires artificial consciousness. Machines would also need to feel emotions. This is very different from pretending to have emotions. It’s very easy for me to learn that when someone is smiling that person is probably happy, and it’s very easy for me to smile back. But hey, I can be smiling but it doesn’t mean I feel happy right now. There’s a big difference between perceiving emotion, displaying emotion, and feeling emotions.

    We know that humans who don’t feel emotions are incapable of making decisions on a daily basis. They can do math, they can solve mathematical puzzles, so they potentially have very high IQs. But if you ask them what they would like for lunch they cannot answer, because there is no algorithm to answer that question. As humans, most of our decisions are emotionally driven. Let’s be honest, we use data to back it up but we make emotional decisions mostly. A machine that doesn’t have emotional intelligence will never be seen as a human-like type of intelligence.

    What I’m trying to get to here is that yes, you will have an AI that has an IQ of five billion and yes every logical task is potentially doable by a machine, but humans will have the monopoly on emotional intelligence. Humans alone will be able to do emotionally driven tasks and so rather than think about machines replacing humans we really have to start thinking about humans and machines working together.

    How can we leverage the horizontal emotional intelligence of humans with the powerful mechanical logical intelligence of machines? Rather than try to build an AI that replaces humans completely, why don’t we start building an AI that actually works with a human in a very natural and very intuitive way?

  • How LinkedIn is Using Machine Learning to Determine Skills

    One of the more interesting reveals that Dan Francis, Senior Product Manager for LinkedIn Talent Insights, provided in a recent talk about the Talent Insights tool is how LinkedIn is using machine learning to determine people’s skills. He says that there are now over 575 million members in the LinkedIn database and there are 35,000 standardized skills in LinkedIn’s skills taxonomy. The way LinkedIn figures out which skills a member has is via machine learning technology.

    Dan Francis, Senior Product Manager, LinkedIn Talent Insights, discussed Talent Insights in a recent LinkedIn video embedded below:

    LinkedIn Using Machine Learning to Determine Skills

    The skills data in Talent Insights comes from a variety of sources, mainly from a member’s profile. There are over 35,000 standardized skills that we have in LinkedIn’s skills taxonomy, and the way we’re figuring out what skills a member has is using machine learning. We can identify skills that a member has based on things that they explicitly added to their profile.

    The other thing that we’ll do is look at the text of the profile. There’s a field of machine learning called natural language processing and we’re basically using that. It’s scanning through all the words that are on a member’s profile, and when we can determine that it’s pertaining to the member, as opposed to the company or another subject, we’ll say okay, we think that this member has this skill. We also look at other attributes, like their title or the company, to make sure they actually are very likely to have that skill.

    The last thing that we’ll do is look at the skills a member has and figure out what the skill relationships are. So as an example, let’s say that a member has Ember, which is a type of JavaScript framework; since we know that they know Ember, we can infer that they also know JavaScript. So if somebody’s running a search like that, we’ll surface them in the results. I think that the most important reason why this is helpful, and the real benefit to users of the platform, is that when you’re searching, you want to get as accurate a view of the population as possible. What we’re trying to do is look at all the different signals that we possibly have to represent that view.
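    As an illustration of those three steps (explicitly listed skills, text-based detection, and skill relationships), here is a hedged toy sketch. The profile, taxonomy, and keyword-matching stand-in for natural language processing are invented for clarity and are not LinkedIn’s code or data.

    ```python
    # Toy sketch of skill inference: explicit skills, skills detected in
    # profile text, then expansion through known skill relationships
    # (e.g., Ember implies JavaScript). All data here is made up.
    profile_text = "Built single-page apps with Ember and wrote REST APIs."
    explicit_skills = {"Ember"}

    # Tiny stand-in for a standardized skills taxonomy.
    taxonomy = {"Ember", "JavaScript", "REST APIs", "Machine Learning"}

    # Naive keyword spotting; a real system would use NLP plus title and
    # company signals to confirm the skill actually pertains to the member.
    detected = {skill for skill in taxonomy if skill.lower() in profile_text.lower()}

    # Skill relationships: knowing the key implies knowing the values.
    implies = {"Ember": {"JavaScript"}}

    skills = set(explicit_skills) | detected
    for skill in list(skills):
        skills |= implies.get(skill, set())

    print(sorted(skills))  # ['Ember', 'JavaScript', 'REST APIs']
    ```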

    575 Million People on LinkedIn Globally and Adding 2 Per Second

    Today, LinkedIn has over 575 million members that are on the platform globally. This is actually growing at a pretty rapid clip, so we’re adding about two members per second. One of the great things about LinkedIn is that we’re actually very well represented in terms of the professional workforce globally. If you look at the top 30 economies around the world, we actually have the majority of professionals in all of those economies.

    LinkedIn is the World’s Largest Aggregator of Jobs

    I think there’s often a perception that most of the data is directly from LinkedIn, stuff that’s posted on LinkedIn, and jobs are one notable exception to that. Plenty of companies and people will post jobs on LinkedIn, and that’s information that does get surfaced. However, we’re also the world’s largest aggregator of jobs. At this point there are over 20 million jobs that are on LinkedIn.

    The way that we’re getting that information is we’re working with over 40,000 partners. These are job boards, ATSs (applicant tracking systems), and direct customer relationships. We’re collecting all of those jobs, standardizing them, and showing them on our platform. The benefit is not just in displaying the data in Talent Insights; the benefit is also that when members are searching on LinkedIn.com, we’re giving them as representative a view of the job market as possible.

  • AWS CEO Announces Textract to Extract Data Without Machine Learning Skills

    AWS CEO Andy Jassy announced Amazon Textract at the AWS re:Invent 2018 conference. Textract allows AWS customers to automatically extract formatted data from documents without losing the structure of the data. Best of all, there are no machine learning skills required to use Textract. It’s something that many data-intensive enterprises have been requesting for many years.

    Amazon Launches Textract to Easily Extract Usable Data

    Our customers are frustrated that they can’t get more of all that text and data that is in documents into the cloud, so they can actually do machine learning on top of it. So we worked with our customers, we thought about what might solve these problems, and I’m excited to announce the launch of Amazon Textract. This is an OCR++ service to easily extract text and data from virtually any document, and there is no machine learning experience required.

    This is important: you don’t need to have any machine learning experience to be able to use Textract. Here’s how it generally works. Below is a pretty typical document; it’s got a couple of columns and it’s got a table in the middle of the left column.

    When you use OCR it just basically captures all that information in a row and so what you end up with is the gobbledygook you see in the box below which is completely useless. That’s typically what happens.

    Let’s go through what Textract does. Textract is intelligent. Textract is able to tell that there are two columns here, so when you actually get the data and the language, it reads like it’s supposed to be read. Textract is able to identify that there’s a table there and is able to lay out for you what that table should look like, so you can actually read and use that data in whatever you’re trying to do on the analytics and machine learning side. That’s a very different equation.

    Textract Works Great with Forms

    What happens with most of these forms is that the OCR can’t really read the forms or actually make them coherent at all. Sometimes these templates will kind of effectively memorize that in this box is this piece of data. Textract is going to work across legal forms and financial forms and tax forms and healthcare forms, and we will keep adding more and more of these.

    But also these forms will change every few years, and when they do, something that you thought was a Social Security number in this box turns out now to be a date of birth. What we have built Textract to do is to recognize what certain data items or objects are, so it’s able to tell this set of characters is a Social Security number, this set of characters is a date of birth, this set of characters is an address.

    Not only can we apply it to many more forms but also if those forms change Textract doesn’t miss a beat. That is a pretty significant change in your capability in being able to extract and digitally use data that are in documents.
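    For developers who want to try this, the sketch below shows one way to call Textract through the AWS boto3 SDK and walk the blocks it returns. It assumes AWS credentials are already configured, the file name is hypothetical, and the Textract documentation should be consulted for the full response format.

    ```python
    # Minimal sketch of calling Textract via boto3 to pull text plus
    # table/form structure from a document image. Assumes AWS credentials
    # are configured; "tax_form.png" is a hypothetical file.
    import boto3

    textract = boto3.client("textract")

    with open("tax_form.png", "rb") as f:
        response = textract.analyze_document(
            Document={"Bytes": f.read()},
            FeatureTypes=["TABLES", "FORMS"],  # ask for structure, not just raw text
        )

    # The response is a list of blocks: pages, lines, words, tables, cells,
    # and key-value pairs detected in the document.
    for block in response["Blocks"]:
        if block["BlockType"] == "LINE":
            print("text:", block["Text"])
        elif block["BlockType"] == "CELL":
            print("table cell at row", block["RowIndex"], "column", block["ColumnIndex"])
    ```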

  • Etsy CEO: Machine Learning is Opening Up a Whole New Opportunity

    Etsy CEO Josh Silverman says that “machine learning is opening up a whole new opportunity” for the company to organize 50 million items into a discovery platform that makes buying an enjoyable experience and also is profitable for sellers.

    Josh Silverman, CEO of Etsy, recently talked about their much-improved business and why it is working so well with Jim Cramer on CNBC:

    Our Mission is Keeping Commerce Human

    Our mission is keeping commerce human. In a world where automation is changing the nature of work and we’re all buying more and more commoditized things from the same few fulfillment centers, allowing someone to harness their creative energy, turn that creativity into a business, and then connect with someone in another part of the country or another part of the world, that’s really special. We think there’s an ever-increasing need for that in this world.

    It’s about value. We’ve been really focused on delivering more value for our makers. Etsy really is a platform that brings buyers to sellers and that’s very valuable. We raised our commission from 3.5 to 5 percent, which I think is fair value for our sellers, particularly because we’re reinvesting 80 percent of that into the growth of the platform.

    Free shipping is pretty much table stakes today, yet only about 20 percent of items have free shipping. Buyers say about half of all the items on Etsy have shipping prices that are too high, and yet we grew GMS at 20 percent last quarter.

    Machine Learning is Opening Up a Whole New Opportunity

    Machine learning is opening up a whole new opportunity for us to take 50 million items from two million makers and make sense of that for people. We have 37 million active buyers now and many of them come just for discovery, just to see what they can find, and that is exactly the right thing for someone out there. Our job is to create that love connection. Etsy over the past 14 years, with a large team effort, has I think done a great job.

    One thing I want to emphasize is the quality and the craftsmanship of so many of the products on Etsy. That’s something that has been such a delight for me. People like Kringle Workshops make these incredible products. What we have been doing a better job of, and need to continue to do a better job of, is really surfacing the beautiful, artisanally crafted products that are available at a really fair price. You’re not having to pay for warehousing, you’re not having to pay for all the other things that mass-produced goods have to pay for; you’re buying directly from the person who made it. So it can be beautiful, handcrafted, and well priced.

    There are 2 million sellers, 87 percent of them women and over 90 percent working from home or as businesses of one, who can create a global business from their garage or their living room. Etsy does provide a real sense of community for them and that’s really powerful.

    Amazon May Open New HQ in Queens Near Etsy

    We feel great about our employee value proposition, come what may. Here’s what we have going for us. We think we’ve got the best team, certainly among tech companies on the eastern seaboard. We think ours is the best and we continue to attract great talent. The reason is, first and foremost, our mission is a really meaningful, important mission and that matters. Great people want to work in a place with a great mission.

    Second, our technology challenges are interesting. For example, search and using machine learning to make sense of 50 million items that don’t map to a catalog. Third, our culture is really special. We have been a company that’s authentically cared about diversity from the beginning. Over 50 percent of our executive staff are women, we have a balanced board, 50 percent male and female, and 32 percent of our engineers are female, which is twice the industry average. People who care about diversity and inclusion really want to come to work at Etsy. All of that is going for us and we’re happy to compete with whoever we need to.

    Earnings Call Comments by Etsy CEO:

    Active Buyers Grew 17 Percent

    Etsy’s growth accelerated again in the third quarter to nearly 21% on a constant-currency basis. Revenue growth exceeded 41%, fueled by the launch of our new pricing structure, and our adjusted EBITDA margins grew to nearly 23%, while we also increased our investments in the business.

    Active buyers grew 17% to 37 million worldwide. This is the fourth consecutive quarter that GMS has grown faster than active buyers, evidence that we are seeing increased buyer activity on the platform, which is a key proxy for improvement in frequency. We grew the number of active sellers by 8% and GMS per active seller is also increasing.

    Two principal levers contributed to our progress this past quarter. The first is our continued product investment, focused on improving the shopping experience on Etsy. By making it easier to find and buy the great products available for sale on Etsy, we’re doing a better job converting visits into purchases. The second lever was our new pricing structure, which enabled us to ramp up investments in marketing, shipping improvements and customer support.

    Successful Cloud Migration

    We achieved a significant milestone in our cloud migration this quarter, successfully migrating our marketplace, Etsy.com, and our mobile applications to the Google Cloud with minimal disruption to buyers and sellers. This increases our confidence that the migration will be complete by the end of 2019.

    Once fully migrated, we expect to dramatically increase the velocity of experiments and product development to iterate faster and leverage more complex search and machine learning models with the goal of rapidly innovating, improving search and ultimately driving GMS growth.

    In fact, we’re beginning to see some of those benefits today based on the systems we’ve already migrated. I’d like to thank our engineering team for their incredible work to get us to this point.

     

  • Thinking About Using AI to Recruit New Staff? Amazon’s Failed Experiment Might Have You Thinking Twice

    Thinking About Using AI to Recruit New Staff? Amazon’s Failed Experiment Might Have You Thinking Twice

    Companies planning to use artificial intelligence for recruitment should think twice before doing so. A new report revealed that Amazon’s recruiting AI learned gender bias and weeded out women as potential job candidates. The system even downgraded applicants based on the school they attended.

    A growing number of employers are using AI to boost the efficiency of their hiring process. The software can be used to evaluate resumes, narrow down a list of applicants, and recommend candidates for the right post within a company, then pass its findings to human recruiters for a final assessment. But while AI is an effective tool for screening resumes, it can also develop bias, as Amazon’s experiment showed.

    Reuters reported that the retail giant spent several years developing an AI that would vet job applicants. The system was trained on the resumes the company had received over the previous ten years, but because most of those applications came from men, the patterns the AI identified skewed heavily toward male candidates. In short, Amazon’s AI learned gender bias.

    For instance, the AI developed a preference for terms like “captured” or “executed,” which were words commonly used by male engineers. The machine also began to penalize applications that included the word “women” or “women’s.” So describing yourself as the head of the “women’s physics club” was a strike against you.
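
    To make the mechanism concrete, here is a minimal, hypothetical sketch (not Amazon’s actual system) of how a text classifier trained on historically skewed hiring data can end up penalizing a token like “women’s” even though gender is never an explicit feature. The toy resumes and labels below are invented purely for illustration.

    ```python
    # Illustrative sketch only -- not Amazon's system. A classifier trained on
    # skewed historical hiring decisions picks up gendered tokens as signal.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy data: past "hired" resumes (label 1) favor male-coded verbs, while
    # resumes mentioning "women's" fall disproportionately in the rejected set.
    resumes = [
        "executed trading systems, captured market share",
        "captured requirements, executed migration plan",
        "led backend team, executed product rollout",
        "captain of women's chess club, built compilers",
        "president of women's physics society, ml research",
        "organized hackathon, taught algorithms",
    ]
    labels = [1, 1, 1, 0, 0, 0]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(resumes)
    clf = LogisticRegression().fit(X, labels)

    # Tokens appearing only in rejected resumes (e.g. "women") receive negative
    # weights, so any future resume containing them is scored down.
    for token, coef in sorted(zip(vectorizer.get_feature_names_out(), clf.coef_[0]),
                              key=lambda pair: pair[1]):
        print(f"{token:12s} {coef:+.3f}")
    ```

    In a toy example like this the bias is obvious; at the scale of real resume data it can hide in thousands of features correlated with gender.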

    A source familiar with Amazon’s AI program also admitted that the machine even downgraded applicants who graduated from two all-women’s universities. The names of the universities were not specified in the report.

    The bias in the AI’s algorithm became noticeable about a year after the project started, and Amazon reportedly tried to correct it. The company’s engineers initially edited the system to make it neutral to these specific words, but there was no way of guaranteeing that the machine would not find another way to sort candidates in a discriminatory manner.

    The project was eventually shelved in 2017 because company executives lost confidence in it. The AI also reportedly failed at providing choices for strong and effective job candidates.

    Fortunately for Amazon, the AI hiring experiment was just a trial run. The system was never rolled out to a larger group and was never used as the main recruiting tool. Nevertheless, it is entirely possible that a qualified applicant was weeded out simply because she was a woman and did not happen to use a masculine-coded term like “captured.”

    [Featured image via Pexels]

  • Amazon Web Services Improves AI for New Consultancy Program

    Amazon Web Services Improves AI for New Consultancy Program

    Amazon has rolled out a consultancy program aimed at helping customers adopt machine learning in the cloud. The company plans to do this by connecting clients with its own experts.

    Dubbed the Amazon ML Solutions Lab, the program helps clients unfamiliar with machine learning find beneficial, efficient uses for it in their business. Amazon plans to do this by combining brainstorming sessions with workshops to help clients better understand cloud-based machine learning, and by assigning its own experts to act as advisors. Together they will work through the problems the client faces and come up with machine learning-based solutions, with Amazon’s cloud experts checking in weekly to see how the project is progressing.

    No two engagements will be alike, though, as the ML Solutions Lab adapts to the needs of the business. For instance, Amazon could send its developers on-site if the client wants a more hands-on approach, or clients could go to AWS’ Seattle headquarters for training.

    How long the ML Solutions Lab works with a company will also depend on the client, but engagements are expected to last anywhere from three to six months.

    Companies with more machine learning experience can opt for the ML Solutions Lab Express, an expedited program that runs for a month and begins with a seven-day intensive bootcamp at Amazon headquarters. This program is only offered to companies that already have machine-learning-ready data, since it is geared toward feature engineering and building models quickly.

    Amazon has not yet shared how much the program will cost. No pricing has been posted on its website, and company representatives are reportedly not responding to requests for comment at the moment.

    Vinayak Agarwal, Amazon’s senior product manager for AWS Deep Learning, pointed out in a blog post that the company has been immersed in machine learning for the past two decades. He also added that Amazon has pioneered innovations in areas like forecasting, logistics, supply chain optimization, personalization and fraud prevention.

    Agarwal further encouraged clients to take a closer look at the ML Solutions Lab, saying they will have access to the experts behind many of Amazon’s own machine learning-based products and services, such as its recommendation and fraud prevention systems.

    The Amazon ML Solutions Lab is being offered to customers worldwide. However, the ML Solutions Lab Express is currently exclusive to US clients.

    To get started with the Amazon ML Solutions Lab, visit https://aws.amazon.com/ml-solutions-lab.

    [Featured image via Amazon Web Services]

  • Google Unveils PAIR Initiative to Improve Relationship Between Humans and AI

    Google Unveils PAIR Initiative to Improve Relationship Between Humans and AI

    Google announced Monday a new initiative geared towards improving the relationship between humans and artificial intelligence (AI).

    The project, called People + AI Research (PAIR), will see Google researchers analyze the way humans interact with AI and the pieces of software it powers. The team, to be led by Google Brain researchers and data visualization experts Fernanda Viégas and Martin Wattenberg, will work to determine how best to utilize AI from the perspective of humans.

    “PAIR is devoted to advancing the research and design of people-centric AI systems. We’re interested in the full spectrum of human interaction with machine intelligence, from supporting engineers to understanding everyday experiences with AI,” the website for the initiative says.

    The thrust of PAIR is to make AI more practical and usable for humans, or to make it “less disappointing or surprising,” as described by Wired.

    An application of this idea would be the use of AI as an aid for professionals like musicians, farmers, doctors and engineers in their vocations. Google, however, did not go into detail on how it will go about putting this idea into practice.

    The researchers also hope to help form impressions of artificial intelligence that will enable people to have realistic expectations of it.

    “One of the research questions is how do you reset a user’s expectations on the fly when they’re interacting with a virtual assistant,” Viégas said.

    Viégas and Wattenberg, along with the 12 full-time members of the PAIR team at Google, will also be working with experts from Harvard and the Massachusetts Institute of Technology.

    PAIR, according to Google, will “ensure machine learning is inclusive, so everyone can benefit from breakthroughs in AI.” Nevertheless, as Fortune points out, there have been questions of whether tech giants like Google and Facebook are keeping AI knowledge to themselves after hiring many highly regarded researchers in different areas of AI such as deep learning.

  • Apple Shares Source Code For Machine Learning Framework at WWDC 2017

    Apple Shares Source Code For Machine Learning Framework at WWDC 2017

    Apple’s recent WWDC (Worldwide Developers Conference) saw the unheralded release of Core ML, which will reportedly make it easier for developers to come up with machine learning tools across the Apple ecosystem.

    The way this works is that developers convert their trained models into the Core ML model format. They then add the converted model to their app in Apple’s Xcode development environment before it can ship on iOS.

    Developers can convert models built with any of the following frameworks: Keras, XGBoost, LibSVM, Caffe, and scikit-learn. To make loading models even easier, Apple also lets them write their own converters.
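
    As a rough illustration of that workflow, the sketch below uses Apple’s coremltools Python package to convert a small scikit-learn model into an .mlmodel file that can be added to an Xcode project. The feature names and training data are made up for the example and are not from Apple’s documentation.

    ```python
    # A minimal sketch of converting a scikit-learn model to the Core ML format
    # with Apple's coremltools package (pip install coremltools). The toy data
    # and feature names are illustrative only.
    from sklearn.linear_model import LinearRegression
    import coremltools

    # Train a toy regressor: predict price from bedrooms and square footage.
    X = [[1, 500], [2, 750], [3, 1000], [4, 1250]]
    y = [100000, 150000, 200000, 250000]
    model = LinearRegression().fit(X, y)

    # Convert to Core ML and save an .mlmodel file for use in an Xcode project.
    coreml_model = coremltools.converters.sklearn.convert(
        model, ["bedrooms", "square_feet"], "price")
    coreml_model.save("HousePricer.mlmodel")
    ```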

    According to Apple, the Core ML is “a new foundational machine learning framework used across Apple products, including Siri, Camera, and QuickType.”

    The company explained that this new machine learning tool would be “the foundation for domain-specific frameworks and functionality.”

    One of the primary advantages of Core ML is that it speeds up on-device AI on the Apple Watch, iPhone, iPad, and perhaps the soon-to-be-released Siri speaker. If it works as billed, AI tasks on the iPhone would reportedly run up to six times faster than on comparable Android devices.

    The machine learning model types supported by Core ML include linear models, neural networks, and tree ensembles. The company also promised that users’ private data won’t be compromised by the new framework: because models run on the device, developers can’t simply tinker with a phone to pull out private information.

    Core ML also supports:

    • Foundation for Natural Language Processing
    • Vision for Image Analysis
    • GameplayKit

    “Core ML itself builds on top of low-level primitives like Accelerate and BNNS, as well as Metal Performance Shaders,” the company added.

    But Apple is reportedly not content with just releasing Core ML. According to rumors, the company is also working to fulfill its promise of building a very fast mobile platform, including a dedicated chip that can handle AI tasks without compromising overall performance.

    Though Core ML seems promising, Apple is certainly not blazing the trail when it comes to machine learning. In fact, Facebook and Google have already unveiled their own machine learning frameworks to optimize the mobile user’s experience.

    The new machine learning framework joins Apple’s family of Core frameworks, which already includes Core Audio, Core Location, and Core Image, as announced earlier.

  • Apple Publishes First AI Research Paper on Using Adversarial Training to Improve Realism of Synthetic Imagery

    Apple Publishes First AI Research Paper on Using Adversarial Training to Improve Realism of Synthetic Imagery

    Earlier this month Apple pledged to start publicly releasing its research on artificial intelligence. During the holiday week, the company released its first AI research paper, detailing how its engineers and computer scientists used adversarial training to improve the realism of synthetic, computer-game-style images, which are frequently used to help machines learn.

    The paper’s authors are Ashish Shrivastava, a researcher in deep learning; Tomas Pfister, another deep learning scientist at Apple; Wenda Wang, an Apple R&D engineer; Russ Webb, a senior research engineer; Oncel Tuzel, a machine learning researcher; and Joshua Susskind, a deep learning scientist who co-founded Emotient in 2012.


    The team describes their work on improving synthetic images to improve overall machine learning:

    With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator’s output using unlabeled real data, while preserving the annotation information from the simulator.

    We developed a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts and stabilize training: (i) a ‘self-regularization’ term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study.

    We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
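
    For readers who want to see roughly what that objective looks like in code, here is a minimal PyTorch sketch of a SimGAN-style refiner loss: an adversarial term computed from per-patch discriminator outputs plus an L1 self-regularization term that keeps the refined image close to the synthetic input. The tiny network architectures, the weight lam, and the image sizes are placeholders, not the paper’s actual settings.

    ```python
    # A minimal sketch of the SimGAN-style refiner objective: adversarial loss
    # on refined images plus L1 self-regularization against the synthetic input.
    import torch
    import torch.nn as nn

    refiner = nn.Sequential(            # stand-in for the paper's refiner network
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1))
    discriminator = nn.Sequential(      # stand-in for the local (patch) discriminator
        nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 2, 1))            # per-patch "real vs. refined" logits

    adv_loss = nn.CrossEntropyLoss()
    lam = 0.1                           # weight of the self-regularization term

    def refiner_loss(synthetic):
        refined = refiner(synthetic)
        logits = discriminator(refined)             # shape (N, 2, H', W')
        # The refiner tries to make every patch look "real" (class 0 here).
        real_labels = torch.zeros(logits.shape[0], logits.shape[2], logits.shape[3],
                                  dtype=torch.long)
        loss_adv = adv_loss(logits, real_labels)
        loss_reg = (refined - synthetic).abs().mean()  # self-regularization (L1)
        return loss_adv + lam * loss_reg

    # Example: a batch of 35x55 grayscale synthetic eye images, roughly the
    # setting of the paper's gaze-estimation experiments.
    synthetic_batch = torch.rand(8, 1, 35, 55)
    print(refiner_loss(synthetic_batch))
    ```

    In full training, updates to this refiner would alternate with discriminator updates drawn partly from a history buffer of previously refined images, as the authors describe.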

    Conclusions and Future Work

    “We have proposed Simulated+Unsupervised learning to refine a simulator’s output with unlabeled real data,” the Apple AI scientists write. “S+U learning adds realism to the simulator and preserves the global structure and the annotations of the synthetic images. We described SimGAN, our method for S+U learning, that uses an adversarial network and demonstrated state-of-the-art results without any labeled real data.”

    They added, “In future, we intend to explore modeling the noise distribution to generate more than one refined image for each synthetic image, and investigate refining videos rather than single images.”

    View the research paper (PDF).

  • Microsoft CEO: We Are Not Anywhere Close To Achieving Artificial General Intelligence

    Microsoft CEO: We Are Not Anywhere Close To Achieving Artificial General Intelligence

    Satya Nadella, CEO of Microsoft, was recently interviewed by Ludwig Siegele of The Economist about the future of artificial intelligence (AI) at the DLD conference in Munich, Germany, where he spoke about the need to democratize the technology so that it becomes part of every company and every product. Here’s an excerpt transcribed from the video interview:

    What is AI?

    The way I have defined AI in simple terms is we are trying to teach machines to learn so that they can do things that humans do, but in turn help humans. It’s augmenting what we have. We’re still in the mainframe era of it.

    There has definitely been an amazing renaissance of AI and machine learning. In the last five years there’s one particular type of AI called deep neural net that has really helped us, especially with perception, our ability to hear or see. That’s all phenomenal, but if you ask are we anywhere close to what people reference, artificial general intelligence… No. The ability to do a lot of interesting things with AI, absolutely.

    The next phase to me is: how can we democratize this access? Instead of worshiping the four, five or six companies that have a lot of AI, let’s get to where AI is everywhere, in all the companies we work with, and every interface, every human interaction is AI powered.

    What is the current state of AI?

    If you’re modeling the world, or actually simulating the world, that’s the current state of machine learning and AI. But if you can simulate the brain and the judgements it can make and transfer learning it can exhibit… If you can go from topic to topic, from domain to domain and learn, then you will get to AGI, or artificial general intelligence. You could say we are on our march toward that.

    We are in those early stages where the technology is at least able to recognize free text and keep track of things by modeling, essentially, what it knows about me, my world, and my work. That’s the stage we are at.

    Explain democratization of AI?

    Sure. A hundred years from now, or 50 years from now, we’ll look back at this era and say some new moral philosopher really set the stage for how we should make those decisions. In lieu of that, though, one thing we’re doing is saying that as we create AI in our products, we are making a set of design decisions, and just like with the user interface, let’s establish a set of guidelines for tasteful AI.

    The first one is, let’s build AI that augments human capability. Let us create AI that helps create more trust in technology because of security and privacy considerations. Let us create transparency in this black box. It’s a very hard technical problem, but let’s strive toward saying how do I open up the black box for inspection?

    How do we create algorithm accountability? That’s another very hard problem because I can say I created an algorithm that learns on its own so how can I be held accountable? In reality we are. How do we make sure that no unconscious bias that the designer has is somehow making it in? Those are hard challenges that we are going to go tackle along with AI creation.

    In the past we’ve thought about quality, security, and software engineering. I think one of the things we find is that, for all of our progress with AI, the quality of the software stack, its ability to ensure the things we have historically ensured in software, is actually pretty weak. We have to go work on that.