WebProNews

Category: AI Trends

  • Facebook’s AI Bots Have Invented Their Own Language

    Would you terminate your employees if they started keeping secrets from you? Basically, Facebook did just that when two of its experimental AI bots started to converse in a language totally alien and undecipherable to researchers.

    In an effort to distinguish themselves from their rivals, tech companies have pushed artificial intelligence to the forefront of their research efforts. Facebook, for instance, is working on AI bots with the capacity to negotiate.

    Negotiation involves complex skills like communication and reasoning, abilities that are “not inherently found in computers.” Thus, Facebook Artificial Intelligence Research (FAIR) embarked on a project that utilized reinforcement learning so that AI bots learn how to negotiate better from past experience.

    The experiment was split in two. One group of AIs was restricted to using only English in its dealings, while the other group had no such restriction. It was the second group that churned out a very interesting result: the seemingly nonsensical dialogue between two AI bots seen below, as provided by Techly:

    Alice: balls have zero to me to me to me to me to me to me to me to me to

    Bob: you i everything else

    Alice: balls have a ball to me to me to me to me to me to me to me to me

    Bob: i . . . . . .. . . . . .

    Generally, people are both amazed and worried about the latest developments in AI, where scientists keep churning out robots that approach human-like intelligence. One can partly blame it on movies like Terminator and The Matrix, with plots showing robots taking over the world and annihilating their former masters.

    Apparently, it is a cause of concern not only for ordinary folks but for some distinguished personalities in the business and scientific communities as well. For instance, renowned physicist Stephen Hawking, as well as inventor, business magnate, and Tesla CEO Elon Musk, have publicly warned of AI’s potential dangers.

    It has been shown that letting AIs make their own language could ultimately mean more efficient communication between bots according to FastCoDesign. But Facebook is not taking any chances and has shut down the non-English-speaking AIs for now.

    [Featured Image by Pixabay]

  • AWS Outpaces Rival Cloud Platforms, Props Up Amazon’s Q2 Earnings Report

    Amazon Web Services’ (AWS) performance was highlighted in the recent second quarter earnings report of parent firm Amazon. In fact, it seems that the retailer’s cloud computing unit held the fort for the entire group, becoming the leading contributor to the company’s profits.

    While Amazon.com, Inc. missed its earnings estimates, AWS continued to dominate its niche, beating rivals Microsoft’s Azure and Google Cloud Platform, The Street reported. Its second quarter revenues rose by 42 percent from year-ago levels to an astounding $4.1 billion after introducing 400 new features and services and becoming the largest publicly held cloud computing provider in the process.

    AWS managed to woo a number of big corporate clients in the last 12 months, which contributed to its massive revenue increase. These include BP PLC, Ancestry.com, and the California Polytechnic State University. In addition, AWS has already entered into agreements to provide artificial intelligence and machine learning services to Capital One Financial Corp., the American Heart Association, and the U.S. space agency NASA.

    Meanwhile, parent firm Amazon’s second quarter earnings fell short of Wall Street estimates despite AWS’ massive contribution. The online retailer also warned of possible negative earnings next quarter as the company continues to allocate massive investments to ensure its future growth.

    “AWS continues to move forward on new products and win more significant enterprise business. That said, it – and public cloud more generally – is not the right answer for every organization at the moment,” was how Kate Hanaghan of IT analyst company TechMarketView explained the anticipated growth slowdown.

    Despite that, Wall Street appears comfortable with Amazon’s strategy, as the company has always shown continued profitability. Amazon share prices have climbed 40 percent since the start of 2017.

    [Featured Image by Robert Scoble/Flickr]

  • War of Words: Elon Musk and Mark Zuckerberg Spar on Importance of AI

    Nothing gets a geek’s dander up more than a discussion of whether a Skynet-like AI will become part of our future, as seen in the beef apparently brewing between Elon Musk and Mark Zuckerberg.

    The two billionaires have opposing views with regard to artificial intelligence. While Musk is known for issuing warnings about the dangers of artificial intelligence, Facebook’s CEO has expressed optimism about how AI can improve people’s lives, a mindset that Tesla’s chief thinks reflects a pretty “limited” understanding of the topic.

    The word war apparently started after Zuckerberg conducted a Facebook Live session. As he relaxed at home and manned the grill, the tech icon answered various questions, including one about AI.

    According to Zuckerberg, people who keep trying to drum up fear of AI are “really negative” and “pretty irresponsible.” He emphasized that any technology, including AI, can be used for either good or bad and that it’s up to designers and developers to be careful of what they create.

    Zuckerberg added that he has a hard time understanding those who are against the development and evolution of AI technology, saying that these people are “arguing against safer cars that aren’t going to have accidents” and “against being able to better diagnose people when they’re sick.”

    It’s safe to assume that Tesla’s boss was among the people Zuckerberg was talking about. Musk met a group of US governors earlier this month and proposed that regulations on artificial intelligence be enacted.

    Musk explained that AI technology posed a huge risk to society, hinting at a future similar to what the Terminator movies have implied.

    “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” Musk said then.

    Upon hearing Zuckerberg’s comments on AI, Musk hit back on Twitter, saying that he has talked to his contemporary about this. He also said that Zuckerberg’s “understanding of the subject is limited.”

    However, Zuckerberg is sticking to his guns as he once more defended his views on AI in a recent Facebook post. He reiterated his optimism about AI and the technology’s potential to improve the world.

    [Featured image via YouTube]

  • Microsoft Develops New Chip for Hololens 2; Promises No Lag, Real-Time Processing

    Microsoft revealed that its next-generation Hololens will pack quite a punch. Scheduled for a 2019 release, the new gadget will arrive with an all-new, more powerful artificial intelligence (AI) chip.

    Augmented reality (AR) technology has been steadily gaining ground in recent years. Among its most recent successes is in the gaming industry, where Niantic Lab’s AR mobile game Pokemon Go became a huge success, dominating the gaming charts in 2016.

    However, Microsoft is betting that, aside from gaming, the AR technology will have more practical applications in the real world. Thus, the software giant introduced the Hololens, a pair of AR smart glasses that can be programmed to assist users in a variety of tasks such as guiding tourists who are unfamiliar with a city, fixing engines, and even surgical operations using visualization tools.


    More Powerful Processor

    At the heart of the Hololens 2 is a new AI chip that will power the device. According to Time, the coprocessor’s main task is to run deep neural networks, a process that parallels how the actual human brain works. The more powerful processor enables the new device to handle, at lightning-fast speeds, the large amounts of data that an ever-changing world can be expected to produce.

    No Lag Time

    Users of this new device will benefit greatly from its upgraded AI chip. The dedicated processor will ensure that the upcoming gadget will process data in real time without any lag, a necessary quality especially in systems that require split-second decisions like driving.

    Self-Contained System

    According to ARN, another advantage is that with the new AI chip in place, the Hololens 2 can be self-contained. Simply put, since the device has its own CPU, it is basically untethered and will not have to depend on a PC or a cloud service to function.

    This advantage is highlighted by Tirias Research’s Jim McGregor in a Seattle Times report. “For an autonomous car, you can’t afford the time to send it back to the cloud to make the decisions to avoid the crash, to avoid hitting a person.”

    [Featured Image by Microsoft]

  • Google Unveils PAIR Initiative to Improve Relationship Between Humans and AI

    Google announced Monday a new initiative geared towards improving the relationship between humans and artificial intelligence (AI).

    The project, called People + AI Research (PAIR), will see Google researchers analyze the way humans interact with AI and the pieces of software it powers. The team, to be led by Google Brain researchers and data visualization experts Fernanda Viégas and Martin Wattenberg, will work to determine how best to utilize AI from the perspective of humans.

    “PAIR is devoted to advancing the research and design of people-centric AI systems. We’re interested in the full spectrum of human interaction with machine intelligence, from supporting engineers to understanding everyday experiences with AI,” the website for the initiative says.

    The thrust of PAIR is to put AI in a form that is more practical for humans, or to make it “less disappointing or surprising,” as described by Wired.

    An application of this idea would be the use of AI as an aid for professionals like musicians, farmers, doctors and engineers in their vocations. Google, however, did not go into detail on how it will go about putting this idea into practice.

    The researchers also hope to help form impressions of artificial intelligence that will enable people to have realistic expectations of it.

    “One of the research questions is how do you reset a user’s expectations on the fly when they’re interacting with a virtual assistant,”  Viégas said.

    Viégas and Wattenberg, along with the 12 full-time members of the PAIR team at Google, will also be working with experts from Harvard and the Massachusetts Institute of Technology.

    PAIR, according to Google, will “ensure machine learning is inclusive, so everyone can benefit from breakthroughs in AI.” Nevertheless, as Fortune points out, there have been questions of whether tech giants like Google and Facebook are keeping AI knowledge to themselves after hiring many highly regarded researchers in different areas of AI such as deep learning.

  • Can AI Replace Your Customer Service Representative?

    Businesses are quickly changing the way they operate by automating menial tasks with the help of Artificial Intelligence or AI. More companies are now using chatbots to help users accomplish tasks that would, in the past, require the assistance of a customer service representative.

    Despite the rapid progress, however, experts say that there is still a glaring need for development before machines can fully replace humans in providing customer support. In order for machines to provide full value in addressing real-life customer concerns, they must first understand human semantics.

    Chatbots as Customer Service Reps

    Using chatbots in place of actual customer service representatives is a good idea, in theory. For one, you can teach a chatbot to answer thousands of possible questions consistently. They even have the capacity to decode questions with grammatical errors, misspellings, and a certain level of colloquialism.

    This autonomy and intelligence are some of the characteristics that have made current chatbots a possibility. But while this holds a lot of promise, there are limitations to machine learning that prevent AIs from fully learning semantics.

    A simple question can have several different interpretations depending on tone and emphasis, and teaching all of that to a bot can be tedious and time-consuming. To provide users with adequate responses, bots need extensive chat log histories that can train them to understand real-life scenarios.

    Companies that want to deploy bots at the forefront of customer support either have to input all of the possible data manually or make do with a bot that doesn’t have sufficient input.

    This is the very reason why we hear stories of bots that have gone rogue minutes after deployment. Without access to properly labeled and extensive chat logs, bots don’t have the full capacity to pair questions with their underlying intent. In that sense, they have only semi-autonomy in dealing with customer concerns.
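    The pairing problem can be sketched in a few lines. The snippet below is a hypothetical, deliberately naive illustration (the intents, phrases, and threshold are all invented, and real systems use trained language models rather than word overlap), but the shape of the task is the same: match a new question against labeled chat logs, and defer to a human agent when no match is strong enough.

```python
# Minimal sketch of pairing a customer question with a known intent
# using word overlap against labeled example questions. All intents
# and phrases here are hypothetical illustrations.

def tokenize(text):
    return set(text.lower().split())

# "Chat log" of past questions, hand-labeled with their intent.
LABELED_LOGS = [
    ("where is my package", "track_order"),
    ("has my order shipped yet", "track_order"),
    ("i want my money back", "refund"),
    ("how do i return this item", "refund"),
]

def match_intent(question, threshold=0.2):
    """Return the best-matching intent, or None if nothing clears
    the threshold (the case a human agent should handle)."""
    words = tokenize(question)
    best_intent, best_score = None, 0.0
    for example, intent in LABELED_LOGS:
        ex_words = tokenize(example)
        score = len(words & ex_words) / len(words | ex_words)  # Jaccard overlap
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None
```

    The threshold is the safety valve: lower it and the bot answers more questions but goes "rogue" more often; raise it and more traffic falls through to human agents.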

    AIs Working in Conjunction With Real Life Customer Service Reps

    Today’s AIs have the capacity to understand basic questions and provide entry-level responses. Anything more complex would still require the understanding of a living and breathing customer service representative. This slight limitation, however, doesn’t mean bots can no longer provide customer support. Many brands and businesses are already making significant investments to integrate AI into their customer service operations.

    The real and imminent possibility at the moment is to deploy AIs and machines to work with people on the front lines of customer support. This advancement on its own can make customer support more accessible and decrease call traffic for most support hotlines.

    Once developers find a way to fully optimize AI in handling real-life scenarios without going rogue, it’s quite certain that using bots as customer service representatives is in our near future. For now, studies and further work need to be done to ascertain if bots can provide customers with a satisfactory resolution to their complex concerns.

    The hype surrounding AI doesn’t mean humans will be obsolete in the customer service sector. This just means businesses can allocate more of their resources and manpower to more demanding aspects of business operation.

  • Facebook Uses AI as a Weapon Against Terrorism

    In the wake of recent terror attacks, Facebook has issued a statement through its newsroom on countering radicalization. The social media platform is immensely popular, with worldwide users reaching 1.94 billion each month. It also can’t be denied that even terrorist groups have easy access to the website. As a result, Facebook has faced criticism over its lack of effort to eradicate terrorist propaganda from its pages.

    Throughout the first half of 2017, there were 571 recorded terrorist attacks, which resulted in 3,924 fatalities around the world. Infamous perpetrator groups involved in these separate attacks include Al-Qaeda, ISIS, the Taliban, Boko Haram, the PKK, and other unknown entities. Just last year, it was discovered that these terror groups had also brought their illegal activities to the web. They were using Facebook to create closed groups to buy and sell weapons and make secure payments through Messenger.

    Facebook finally decided that it had had enough, deploying one of its best soldiers to stop terrorists from using the website: artificial intelligence. As explained by the team, the technology is similar to that used to block child pornography. The algorithm also aims to eradicate hate speech and the efforts of jihadist recruiters.

    Currently, the social media platform’s AI algorithms counter terrorism in the following ways:

    Trace Terrorist Content: Although it is still in the experimental stage, Facebook aims to perfect its “language understanding” algorithm. This can help identify terrorist content through text-based signals.

    Find Terrorist Clusters: The AI is designed to look for terrorism-associated pages, posts, groups, personal accounts, and other materials that support terrorism content. It can also determine whether an account that has been disabled for terrorism shares the same attributes with an active account.

    Image Matching: The algorithm also helps the social media site to recognize images or videos that have previously been flagged. Doing so will prevent the upload and sharing of any terrorist propaganda.

    Furthermore, Facebook is said to be currently developing another algorithm that will help them investigate terrorist activity across other platforms. WhatsApp and Instagram, which belong to the family of Facebook apps, are also popular among terrorists.

    To strengthen their defenses, the company has admitted to using human efforts as well. From academic experts on counterterrorism, former law enforcement agents and analysts, former prosecutors, to engineers, Facebook has employed a lot of these professionals to continue the fight against terrorism the best way they know how.

  • Facebook is Now Teaching Bots How to Negotiate…and Lie

    Facebook has built a massive pool of bots that reside in its Messenger app. These bots were trained to answer basic questions and perform basic commands based on text-to-dialogue recognition. While the bots held a lot of promise, it was obvious that they still had a lot to learn before they could fluently converse with humans.

    After recognizing the flaws in the initial rollout, the company reinstated the menu option and went back to the drawing board. Facebook then played around with a combination of natural language learning and the kind of machine learning more commonly used in the gaming industry. As a result, it was able to develop an AI that can negotiate on behalf of and with humans.

    How did Facebook achieve such a feat? They built two bots and presented them with several objects: two books, one hat, and three balls. Each bot was programmed with a hidden preference, and their goal was to compromise with each other until they reach a point where they can both walk away with what they want.
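    To make the setup concrete, here is a minimal sketch of how such a split might be scored. The valuations and the acceptance rule below are hypothetical illustrations of the idea, not FAIR’s actual training objective:

```python
# Hypothetical sketch of the negotiation setup described above: a fixed
# pool of items (2 books, 1 hat, 3 balls) and two agents, each with a
# hidden per-item valuation. A proposed split is scored from each side.

POOL = {"book": 2, "hat": 1, "ball": 3}

def score(split, values):
    """Total value of an agent's share under its hidden valuation."""
    return sum(values[item] * count for item, count in split.items())

def acceptable(split_a, values_a, values_b, pool=POOL):
    """Both agents accept when each side's take beats walking away (0);
    agent B implicitly receives whatever A does not take."""
    split_b = {item: pool[item] - split_a.get(item, 0) for item in pool}
    return score(split_a, values_a) > 0 and score(split_b, values_b) > 0

# Example hidden preferences: agent A wants balls, agent B wants books.
values_a = {"book": 0, "hat": 1, "ball": 3}
values_b = {"book": 4, "hat": 1, "ball": 0}

# A proposal: A takes all the balls, leaving B the books and the hat.
proposal = {"book": 0, "hat": 0, "ball": 3}
```

    Because neither agent can see the other’s valuation, a deal like the proposal above can satisfy both sides at once, which is exactly what the bots were trained to find through repeated dialogue.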

    This experiment was conducted by the Facebook Artificial Intelligence Research (FAIR) group in collaboration with the Georgia Institute of Technology. The group now claims that they have the code that will teach bots how to negotiate.

    While this technology holds a lot of promise, there’s no denying that there is still a lot of room for improvement, since the experiment is only in its initial stages. According to some experts, while the code did teach bots how to negotiate, it also taught them to lie (by putting false emphasis on an object) in order to achieve their goal. If a business owner decides to integrate these cunning bots into their current business model, they are bound to encounter some problems.

    The bots also showed signs of excessive willingness to concede just to achieve a decent gain, which could translate into bad business decisions.

    Despite its flaws, the bots built for this exercise showcased extensive conversational skills that were far superior to any bots in operation today. They showed a capability to construct complex sentences and form a deep understanding of the messages being delivered to them.

    The researchers at FAIR said that they will continue to improve the bots’ ability to form more competitive reasoning strategies while further broadening their understanding of natural language. This means that we are bound to see more eloquent bots capable of negotiating deals in the future.

  • Apple Shares Source Code For Machine Learning Framework at WWDC 2017

    Apple’s recent WWDC (Worldwide Developers Conference) saw the unheralded release of Core ML, which will reportedly make it easier for developers to come up with machine learning tools across the Apple ecosystem.

    The way this works is that developers need to convert their trained models into a format compatible with Core ML. They then load the models into Apple’s Xcode development environment before they can be installed on iOS.

    Developers can use any of the following frameworks: Keras, XGBoost, LibSVM, Caffe, and scikit-learn. To make it even easier for them to load their models, Apple is allowing them to come up with their own converter.
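    As a rough sketch of that workflow, the snippet below trains a trivial scikit-learn model and converts it with Apple’s coremltools package (pip install coremltools). The model and feature names are placeholders, and the converter call shown follows the coremltools interface as released around WWDC 2017; later versions changed the API.

```python
from sklearn.linear_model import LinearRegression

# Train a trivial model to have something to convert.
model = LinearRegression()
model.fit([[1.0], [2.0], [3.0]], [2.0, 4.0, 6.0])

try:
    import coremltools

    # Convert to Core ML format; the resulting .mlmodel file is what
    # gets added to an Xcode project and bundled into an iOS app.
    coreml_model = coremltools.converters.sklearn.convert(
        model, input_features=["x"], output_feature_names="y")
    coreml_model.save("MyModel.mlmodel")
except ImportError:
    pass  # coremltools not installed; the sklearn model still works
```

    The same pattern applies to the other supported frameworks: train in Keras, XGBoost, LibSVM, Caffe, or scikit-learn, convert once, and ship the resulting file in the app bundle.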

    According to Apple, the Core ML is “a new foundational machine learning framework used across Apple products, including Siri, Camera, and QuickType.”

    The company explained that this new machine learning tool would be “the foundation for domain-specific frameworks and functionality.”

    One of the primary advantages of Core ML is that it speeds up artificial intelligence on the Apple Watch, iPhone, iPad, and perhaps the soon-to-be-released Siri speaker. If it works the way it is billed, any AI task on the iPhone, for instance, would run six times quicker than on Android.

    The machine learning tools supported by Apple Core ML include linear models, neural networks, and tree ensembles. The company also promised that users’ private data won’t be compromised by the new endeavor. This means that developers can’t just tinker with any phone to steal private information.

    Core ML also supports:

    • Foundation for Natural Language Processing
    • Vision for Image Analysis
    • GameplayKit

    “Core ML itself builds on top of low-level primitives like Accelerate and BNNS, as well as Metal Performance Shaders,” the company added.

    But Apple is reportedly not content with just releasing the Core ML. According to rumors, the company is looking to fulfill its promise of helping to build a very fast mobile platform. In fact, if the rumors are true, the company is also building a much better chip that can handle AI tasks without compromising performance.

    Though Core ML seems promising, Apple is certainly not blazing the trail when it comes to machine learning. In fact, Facebook and Google have already unveiled their own machine learning frameworks to optimize the mobile user’s experience.

    The new machine learning framework is still part of Apple’s Core Brand, which already includes Core Audio, Core Location, and Core Image as announced earlier.

  • How Will AI Affect Project Management in the Future?

    Companies are just discovering the potential of AI to unburden project managers, who are already spending too much time on paperwork or management tasks rather than crafting strategies and future plans on the macro level.

    The average project manager today has so many responsibilities that it’s a wonder they get anything done beyond filing and signing documents, making sure everybody follows the schedule, crafting budgets, and handling other administrative duties.

    This routine has been maintained for so long that companies take the delay as par for the course. In public organizations, a large bureaucracy can add to the Gordian knot, to the point where a project submitted on time is now sometimes greeted with surprise.

    According to the Harvard Business Review, over half of a project manager’s time is wasted on administrative tasks. In fact, almost nine in 10 survey respondents said they could benefit from AI support so they wouldn’t have to focus so much on those tasks.

    The good news is that there are already AI tools on the market today that can help project managers unclog their desks. Kono.ai, for example, has an app that can work as an effective smart assistant. The Monte Carlo app, meanwhile, can submit a risk analysis through probabilities. Admittedly, developers are just scratching the surface of what AI can really do for project managers.

    Scott Middleton, the CEO of Stratejos, a smart assistant software maker, says that despite skepticism by employees about AI stealing their jobs, the future of machine-learning in relation to business tasks is bright.

    “AI isn’t to be feared,” he explained. “It may even be your best team member, especially for project managers. AI for project management is on the rise, and the way things are going, it’s going to help teams make smarter decisions and move faster.”

    A survey from software developer Atlassian revealed that more than 70% of those surveyed claimed that half of their tasks can be done by robots or AI tech. Right now, almost 40% believe that they are already utilizing AI in their office.

    Middleton predicted that developers—and companies in general—would place more focus on smart assistants for project managers to relieve them of some of their more menial tasks. In the future, the number of complicated tasks assumed by robots will only increase.

    But the AIs of today are severely limited in scope. For instance, they still rely on data collected and input by humans. These robots are not self-updating, nor do they make corrections automatically if they spot a mistake.

    That will change in the future, of course.

    As for the fears of middle managers, as well as the rank and file, it seems unlikely that machines will take over whole organizations, because they lack the capacity for creative thinking in solving complex problems.

    What they do, however, is cut back on the number of errors committed between a project’s implementation and its submission. As the technology advances, they will become invaluable tools in reporting and monitoring.

    Instead of project managers defining the scope of the work, assigning tasks to the teams, analyzing the data, adjusting timetables, documenting the process, predicting outcomes, and gauging the risks, these can all be done through machine-learning.

    This is the reason why companies are well advised to start thinking about investing in AI as an assistive tool in everyday tasks. In the same vein, instead of being viewed as a threat to human jobs, project managers should teach themselves to better harness today’s advances in machine-learning to come up with solutions to their core project problems.

  • Google Lens Creates Buzz After Making Bold Promises at I/O Keynote

    Google Lens ended up overshadowing the rest of the recent I/O keynote when it was first introduced by the American tech company.

    However, there’s a good reason why it stole the show, considering that the new update has huge potential in changing the present augmented reality landscape.

    According to the company, the technology utilizes machine-learning to classify objects in the real world that are viewed through the user’s phone camera. In addition, it also has the capability to analyze and interpret the objects with the end objective of anticipating what it is the user intends to do.

    Google Lens can even automatically connect to a Wi-Fi router by using optical character recognition on the username and password. The user can also pull up reviews of a restaurant in a pinch with the use of the new feature.

    Google CEO Sundar Pichai said during the conference, “All of Google was built because we started understanding text and web pages. So the fact that computers can understand images and videos has profound implications for our core mission.”

    The Google Lens will become part of the update for Google Photos and for the Android smart assistant in the future. Unfortunately, it’s not yet available for commercial release.

    The potential of this new update should not be underestimated as it will change the way people use the search box and mobile devices. Instead of going to Google Search to type their queries, Google Lens will exploit visual media to narrow down the relevant results. It will also make use of the calendar, camera, and other native apps to provide info.

    Voice search allows the user to skip typing on the search box, but one of its problems has been accuracy. Google Lens, in theory, won’t have such issues. Using the camera will allow the user to identify the type of chair or its manufacturer, for example. Once the technology is perfected, the user can then ask Google Assistant to order the same product online.

    Google Lens will also have real-world applications that would be invaluable in bridging the language divide. Facebook is working on its machine-learning to hone its translator code in the platform, but Google’s AI may take it one step further.

    For instance, instead of copy-pasting the words or sentences that need to be translated, the user will just point the mobile phone’s camera toward the text, and if Google follows through with its promise, you should get the translation results in a snap.

  • Want a Digital Friend? Facebook ParlAI Could Be the Answer

    Facebook has rolled out ParlAI (pärˌlē), which is expected to revolutionize conversational AI systems across different platforms.

    Yann LeCun, head of Facebook Artificial Intelligence Research (FAIR), explained, “Ultimately one of the objectives of this is to have your own digital friend, your virtual assistant that is basically customized for you and under your control.”

    For ParlAI, the FAIR team worked closely with the people who developed Facebook M, the social media company’s smart assistant for its messaging service. The aim is to make chatbots more responsive, articulate, and eventually, more efficient.

    Researchers will also be able to customize ParlAI to be used in different technologies as the source code will soon be released by Facebook.

    The system itself boasts 20 built-in languages. Once perfected, there won’t be any questions out there that the chatbots can’t answer. Initially, example Q&A data sets from Microsoft, Stanford, and Facebook have been incorporated.

    The chatbots could also extrapolate the meaning of a question, taking into account the nuances of each language. The trick is to create an algorithm that is capable of machine-learning the complexities of the language and adapting accordingly.

    While this new development is not exactly a breakthrough in natural language, ParlAI is nevertheless an important step toward better communication between chatbots and humans.

    Jason Weston, a researcher at FAIR, said that Facebook’s ParlAI is not exactly new technology, as researchers before have already made advances in question-answering systems. However, any progress they made was ignored because, first, it was too narrow, and second, it was micro-focused on a single task.

    It’s also a case of “once burned, twice shy,” as researchers have promised industries new benchmarks in conversational models before, only to disappoint with the results. As it stands, there’s just no incentive for other researchers to piggyback on these benchmarks to add value to the technology.

    Weston explained that while ParlAI does not claim to be the bridge that connects all of these separate data sets, it does aim to leapfrog dialog reinforcement involving chatbots. Think of this machine-learning technology as a baby—the more people talk to it, the more it learns and anticipates. However, the technology will take time.

    “That will take a while before those things are general enough that they can take care of all the things of a human assistant,” LeCun said, before adding, “We’re talking decades.”

  • Facebook Aims to Break Down Language Barrier with New Translator AI

    Facebook Aims to Break Down Language Barrier with New Translator AI

    True to its goal of connecting people all over the globe, Facebook has been trying to crack the language barrier for its billions of users. On Tuesday, its AI research team revealed a method that will drastically improve the way its 1.8 billion members understand each other on the social media platform.

    The Facebook Artificial Intelligence Research (FAIR) team reported a breakthrough in the use of a novel convolutional neural network (CNN), as opposed to the recurrent neural networks (RNNs) commonly used across platforms to translate languages.

    Research results found that the new language model runs at “nine times the speed” of current RNN models. The FAIR team admitted that they have just scratched the surface, as the model can potentially be sped up further using other distillation methods.

    For the longest time, the RNN has outperformed the CNN in language interpretation, but it is sorely limited in the way it processes information. As the Facebook engineers explained, it translates “in a strict left-to-right or right-to-left order, one word at a time.” This means the program has to finish one word before moving on to the next.

    The problem is that the RNN’s sequential approach is a poor match for the GPU, the default hardware powering modern machine learning, which is highly parallel. The GPU is far better suited to the CNN, which interprets data in a hierarchical manner and can translate multiple words simultaneously by factoring in the correlations between the words in a sentence.

    Christopher Manning, professor of Computer Science and Linguistics at Stanford University, described Facebook’s announcement as an “impressive achievement.” The professor, who works with machine learning, said the breakthrough allows the social media company to speed up existing translation models.

    “You can have parallel computation on different parts of a sentence. You don’t have to push things along word by word,” he explained.

    Indeed, the research concluded that the CNN approach is more efficient on account of its multi-hop attention, which means it goes back to the original sentence time and again to translate multiple words. It can also potentially focus on two separate facts at the same time and interpret them within a larger context, which helps break down complex sentences.
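    The sequential-versus-parallel distinction can be sketched in a few lines of NumPy. This is a toy illustration, not FAIR's actual model: the RNN must compute each word's state from the previous one, while a convolution can score every position independently, which is what a GPU exploits.

```python
import numpy as np

# Toy sentence: 6 "word embeddings" of dimension 4.
rng = np.random.default_rng(0)
sentence = rng.standard_normal((6, 4))

# --- RNN-style: strictly sequential, one word at a time ---
W_h = rng.standard_normal((4, 4)) * 0.1
W_x = rng.standard_normal((4, 4)) * 0.1

def rnn_states(seq):
    h = np.zeros(4)
    states = []
    for x in seq:                      # each step depends on the previous one
        h = np.tanh(h @ W_h + x @ W_x)
        states.append(h)
    return np.stack(states)

# --- CNN-style: a width-3 convolution sees every position at once ---
W_conv = rng.standard_normal((3, 4, 4)) * 0.1

def cnn_states(seq):
    padded = np.pad(seq, ((1, 1), (0, 0)))  # pad so output length == input length
    # Every position is independent of the others, so all of them
    # can be computed in parallel on a GPU.
    return np.stack([
        np.tanh(sum(padded[i + k] @ W_conv[k] for k in range(3)))
        for i in range(len(seq))
    ])

print(rnn_states(sentence).shape)  # (6, 4)
print(cnn_states(sentence).shape)  # (6, 4)
```

    Both produce one state per word, but only the convolutional version is free of the word-by-word dependency the engineers describe.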

    Facebook is going to share the source code on GitHub so other engineers can customize the new language translator for further efficiency and accuracy. More importantly, researchers can use the code to break down the language barrier for multiple platforms, not just on Facebook or its affiliated sites.  

  • Echo Show Highlights Amazon’s Dominance in Home AI Technology

    Echo Show Highlights Amazon’s Dominance in Home AI Technology

    Amazon further distanced itself from Google with the launch of Echo Show, a voice-activated home smart assistant with a 7-inch touchscreen display.

    With the 7-inch screen, homeowners will be able to sing along with lyrics on display, monitor security cameras, read shopping lists, keep up with the news, or view photos and videos. They can also make hands-free video calls with paired devices.

    Smart home devices are geared to become the next battleground for tech companies, with the industry estimated to hit almost $200 billion by 2021. But while Google, Apple, and the rest of the pack are still trying to catch up, Amazon has seemingly asserted its dominance with the announcement of its latest product.

    Google’s answer to Alexa, dubbed Home, was released a full two years after Amazon introduced its smart home devices. Microsoft’s own version was only released this week, while Apple has no timetable for a release.

    Martin Utreras, vice president of forecasting at market research firm eMarketer, said that Alexa-powered Echo devices already corner 70% of the market, compared to the 23.8% controlled by Google Home. Amazon managed this by opening the ecosystem to third-party developers such as Ford, GE, and LG, whose smart cars and appliances can link up with Alexa.

    “Consumers are becoming increasingly comfortable with the technology, which is driving engagement,” the forecaster said. “As prices decrease and functionality increases, consumers are finding more reasons to adopt these devices.”

    The Amazon Echo Show is seen to address previous gripes about functionality. A picture really does paint a thousand words: users found it difficult to take in information spoken by Alexa. Search results, for instance, are much easier to absorb when you read them instead of listening to each one being dictated.

    However, surveys have shown that customers are under no illusion about the capacity of the smart home AI assistants to replace PCs, tablets, or mobile phones. In fact, according to the survey, homeowners don’t really want to see a web browser in the Amazon Echo Show or any other similar devices with a touchscreen display.

    Instead, they want easy access to the clock, calendar, news headlines, weather, music, or entertainment, which only serves to affirm that homeowners want the innovation to enhance their experience in performing any voice-activated task.

    Amazon’s Echo Show will be released in the U.S. on June 28, 2017, with a price tag of $229.99. Shipping will be free.

  • How Facebook AI Chatbots Benefit E-Commerce Businesses

    How Facebook AI Chatbots Benefit E-Commerce Businesses

    Facebook continues to develop AI chatbots that help e-commerce and retail companies grow their businesses at minimal cost through tech innovation.

    Chatbots are certainly not new. People who dial 1-800 numbers have talked to these smart assistants at one point or another. However, innovations in this technology have made chatbots more interactive and responsive.

    Then Facebook came in and changed the lay of the land.

    David Marcus, Vice President of Messaging Products at Facebook, revealed that there are now 11,000 chatbots on Facebook reaching almost a billion users. Facebook M, the company’s text-based virtual assistant feature, has been modified to make the AI better.

    “M will make automated suggestions based on chat intent,” he wrote last month. “These suggestions will help you get more from your Messenger experience by shortening the distance between what you need to do and getting it done.”

    Facebook also introduced more changes to Messenger that will further aid small businesses in improving customer experience and reaching their target clients. Among the changes are:

    • Smart Replies for Pages – This feature allows small businesses to interact with their customers even while they are busy managing day-to-day operations. They can also customize the API to return predetermined answers to the most frequently asked questions.
    • Hand-Over Protocol – This feature allows businesses to manage and even expand their services. With the help of developers, they can create one bot that handles customer service and another that handles orders.
    • Parametric Messenger Codes – This allows businesses to create quick-response codes that compartmentalize services. In the future, these codes could let a phone’s camera stand in for a price scanner.
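    A Smart Replies-style bot boils down to matching incoming text against canned answers and building a reply payload. The FAQ table and helper below are hypothetical, though the payload shape follows Facebook's Messenger Platform Send API:

```python
# Hypothetical FAQ table for a small business's Page.
FAQ_ANSWERS = {
    "hours": "We're open 9am-6pm, Monday through Saturday.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def build_reply(recipient_id: str, user_text: str) -> dict:
    """Match a message against canned FAQ answers and build a Send API payload."""
    text = next(
        (answer for keyword, answer in FAQ_ANSWERS.items()
         if keyword in user_text.lower()),
        "Thanks for reaching out! A team member will reply shortly.",
    )
    # This dict is what a webhook would POST back to the Send API.
    return {
        "recipient": {"id": recipient_id},
        "message": {"text": text},
    }
```

    The hand-over protocol mentioned above kicks in when the keyword match fails: the conversation is passed from the bot to a human agent instead of the fallback text.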

    Meanwhile, Facebook already rolled out the ability to accept bill payments without bouncing users to an external website last year.

    Facebook Messenger is free to use, along with the reach of the social media giant (with nearly 2 billion accounts), and that makes it a perfect option for e-commerce businesses. Mark Zuckerberg and the rest of the company are even making it easier for small businesses to embed the conversational tool into their websites.

    The potential for Facebook AI chatbots in e-commerce is huge. For instance, they can be customized to fit the goals of the particular business, whether that means promoting the brand, reaching targeted consumers, raising awareness during a product launch, or generating automatic replies to queries. All of these will hopefully influence a potential customer’s decision to order a product, thereby driving retail conversion.

    After the conversion, customer support can also be delegated to these smart assistants so businesses don’t have to hire new people to accept complaints, answer queries, or render post-purchase services.

    Facebook AI chatbots also extend beyond the business-customer dynamics. In forging partnerships with other businesses, for instance, these tools can serve as the “advanced party” and give the potential investor the necessary due diligence even before making initial contact.

    The success of Siri and Amazon Alexa in assisting users with daily tasks highlights the potential of AI chatbots in e-commerce. And this will only grow as developers perfect the technology and more people recognize its importance. A study by Oracle last year revealed that 80% of the 800 businesses surveyed believe they will use AI chatbots by 2020.

    A similar study by Gartner, an IT research and advisory company, forecast that nearly 90% of interactions between customers and businesses in three years’ time will be handled by AI chatbots.

  • Microsoft and LinkedIn Ready to Challenge Salesforce CRM

    Microsoft and LinkedIn Ready to Challenge Salesforce CRM

    Since Microsoft’s $26 billion acquisition of LinkedIn closed in December 2016, little has been heard about what would be Microsoft Dynamics’ bold upgrade to the CRM industry. But the picture has become much clearer since Microsoft CEO Satya Nadella recently gave Reuters the details of upgrades to its sales software that integrate data from LinkedIn.

    As Reuters describes it: “The new features will comb through a salesperson’s email, calendar and LinkedIn relationships to help gauge how warm their relationship is with a potential customer. The system will recommend ways to save an at-risk deal, like calling in a co-worker who is connected to the potential customer on LinkedIn.”

    The enhancements, which will be available this summer, will require Microsoft Dynamics customers to also be LinkedIn customers.
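    Reuters does not describe the scoring internals, but a relationship-warmth signal of the kind implied could be sketched like this. The function name, inputs, caps, and weights are all invented for illustration:

```python
def warmth_score(emails_90d: int, meetings_90d: int, shared_connections: int) -> int:
    """Combine hypothetical activity signals into a rough 0-100 'warmth' score."""
    score = min(emails_90d, 20) * 2           # recent email volume, capped
    score += min(meetings_90d, 5) * 8         # calendar meetings weigh more
    score += min(shared_connections, 10) * 2  # mutual LinkedIn contacts
    return min(score, 100)
```

    A deal might be flagged “at risk” when the score for its key contact drops below some threshold, triggering the co-worker recommendation the article describes.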

    Microsoft is a small player in sales software. According to research firm Gartner, the company ranks fourth – far behind Salesforce.com and other rivals Oracle Corp and SAP – with just 4.3% of the market in 2015, the most recent year for which figures are available.

    Nadella, long known as a champion for the democratization of artificial intelligence (AI), said it would be a key component to Dynamics: “I want to be able to democratize AI so that any customer using these products is able to, in fact, take their own data and load it into AI for themselves.”

    Microsoft’s “third cloud” is Nadella’s term for Dynamics, which caters to fields such as sales and finance. With Office 365 (work productivity and flow, email, etc.) and Azure (computing and databases) as the first two clouds, Nadella envisions all Microsoft products utilizing a common set of business data that can be mined for new insights with artificial intelligence.

    “I think that’s the only way to long-term change this game, because if all we did was replace somebody else’s (sales), or (finance) application, that’s of no value, quite frankly,” he told Reuters.

    With this news, it’s no surprise that LinkedIn also announced both a new ad targeting platform (Matched Audiences) and that it now has 500 million users.

  • IBM Puts Watson into the Hands of Marketers

    IBM has announced new cognitive capabilities for IBM Watson Marketing Insights, IBM’s cloud-based marketing and customer analytics platform, which uses the Watson artificial intelligence system to better examine and predict customer behavior.

    Audience insights is a cognitive feature that reveals key predictors in customer data based on their interactions with the brand across channels including email, digital, social media and in-store, as well as customer attributes. This data is continuously updated, revealing new audience profiles and customer segments as the relative importance of their behavior predictors changes.

    These insights are delivered to the marketing team via a visual dashboard that includes details of the context and reasoning behind the findings. With this information marketers can proactively target campaigns designed specifically to engage this group with a relevant offer and retain their loyalty.

    “We understand that a customer’s journey has many touch points, and our clients want to make this journey seamless,” said Maria Winans, Chief Marketing Officer, IBM Watson Customer Engagement. “While every customer is different, they all have one thing in common: they are interacting with brands across multiple channels. With these new cognitive capabilities, we give marketers the audience insights they need to strengthen customer engagement and deliver better performing campaigns.”

  • The Cognitive Era is the Next Societal Revolution That Will Change the World

    The Cognitive Era is the Next Societal Revolution That Will Change the World

    “The transformational nature of artificial intelligence requires new metrics of success for our profession,” says Guru Banavar who is responsible for advancing the next generation of cognitive technologies and solutions with IBM’s global scientific ecosystem including academia, government agencies and other partners. “It is no longer enough to advance the science of AI and the engineering of AI-based systems. We now shoulder the added burden of ensuring these technologies are developed, deployed and adopted in responsible, ethical and enduring ways.”

    IBM is at the cutting edge of the practical integration of artificial intelligence into real-world solutions as evidenced by IBM Watson’s recent integration with H&R Block software to improve tax deduction possibilities for the average consumer. Now that’s something that everybody can relate to!

    Dr. Banavar recently delivered the 2017 Turing Lecture, a prestigious annual lecture co-hosted by the British Computer Society (BCS) and the Institution of Engineering and Technology (IET).

    “Most of the really exciting work going on in AI today is not about this at all (referring to the movie Morgan, which focused on artificial general intelligence),” noted Dr. Banavar. “It’s not about machines that look and talk and feel like humans. It’s not about machines that work like humans, but it’s about machines that work with humans.” He says that although this is a rather fine distinction, it’s a really important one.

    The Cognitive Era

    “There is a big revolution going on, and in my mind it is of the same magnitude as the Industrial Revolution,” says Dr. Banavar. “Every time one of these revolutions has happened, we have seen tremendous changes in society, in the economies of the world, and in all our lives. I believe we are at the beginning of yet another such revolution. I call that the Cognitive Era.”

    “I think that we as human beings are now getting overwhelmed with respect to our cognitive capabilities,” commented Dr. Banavar. “Just trying to understand all of the data around us, all of the knowledge around us, and trying to make the right decisions about our daily lives, about our jobs, is getting really hard. We need to augment our cognition with the cognition of machines.”

    Dr. Banavar gave the example that we all know that doctors can often be years behind the latest research and data. “What if your doctor had the benefit of a machine that could help do this kind of analysis before they make their final decision?”

    View Dr. Banavar’s lecture in its entirety, starting at the 18:10 mark:

  • Apple Publishes First AI Research Paper on Using Adversarial Training to Improve Realism of Synthetic Imagery

    Apple Publishes First AI Research Paper on Using Adversarial Training to Improve Realism of Synthetic Imagery

    Earlier this month Apple pledged to start publicly releasing its research on artificial intelligence. During the holiday week, Apple has released its first AI research paper detailing how its engineers and computer scientists used adversarial training to improve the typically poor quality of synthetic, computer game style images, which are frequently used to help machines learn.

    The paper’s authors are Ashish Shrivastava, a researcher in deep learning, Tomas Pfister, another deep learning scientist at Apple, Wenda Wang, Apple R&D engineer, Russ Webb, a Senior Research Engineer, Oncel Tuzel, Machine Learning Researcher and Joshua Susskind, who co-founded Emotient in 2012 and is a deep learning scientist.


    The team describes their work on improving synthetic images to improve overall machine learning:

    With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator’s output using unlabeled real data, while preserving the annotation information from the simulator.

    We developed a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts and stabilize training: (i) a ‘self-regularization’ term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study.

    We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
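    Two of the modifications the authors name, the self-regularization term and the local adversarial loss, combine into a refiner objective along these lines. This is a NumPy sketch of the idea, not Apple's implementation:

```python
import numpy as np

def refiner_loss(refined, synthetic, local_d_scores, lam=0.1):
    """SimGAN-style refiner objective (illustrative sketch).

    refined, synthetic -- the refined image and the synthetic input it came from
    local_d_scores     -- discriminator's probability that each local patch of
                          `refined` is real (the 'local adversarial loss'
                          judges patches, not whole images)
    lam                -- weight of the self-regularization term
    """
    # Adversarial term: push every local patch toward fooling the discriminator.
    adversarial = -np.mean(np.log(local_d_scores + 1e-8))
    # Self-regularization: stay close to the synthetic input so the
    # annotations (e.g. gaze direction) are preserved.
    regularization = lam * np.mean(np.abs(refined - synthetic))
    return adversarial + regularization
```

    The third modification, the history of refined images, means the discriminator is also shown samples drawn from a buffer of past refiner outputs, which keeps it from forgetting artifacts the refiner used to produce.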

    Conclusions and Future Work

    “We have proposed Simulated+Unsupervised learning to refine a simulator’s output with unlabeled real data,” say the Apple AI scientists. “S+U learning adds realism to the simulator and preserves the global structure and the annotations of the synthetic images. We described SimGAN, our method for S+U learning, that uses an adversarial network and demonstrated state-of-the-art results without any labeled real data.”

    They added, “In future, we intend to explore modeling the noise distribution to generate more than one refined image for each synthetic image, and investigate refining videos rather than single images.”

    View the research paper (PDF).

  • IBM Watson Brings AI to H&R Block Tax Preparation

    IBM Watson Brings AI to H&R Block Tax Preparation

    IBM announced a partnership with H&R Block to use its artificial intelligence platform IBM Watson to radically improve tax preparation. “Introducing the biggest advancement in tax preparation technology,” exclaimed IBM in an announcement video. “Say hello to the partnership between H&R Block and IBM Watson. Imagine being able to understand all 74,000 pages of the US tax code along with thousands of yearly tax law changes and other information, plus unlock deep insights built from over 600 million data points.”

    “Imagine being able to understand all that information,” noted IBM. “Watson will learn from it and help your tax pro find every credit, deduction and opportunity available. The one of a kind partnership between H&R Block and Watson is revolutionizing the way people file taxes.”

    H&R Block is marketing the new AI integration as “the future of tax prep” as seen in their new Google ad:

    “H&R Block is revolutionizing the tax filing experience,” stated Bill Cobb, President and Chief Executive Officer of H&R Block. “By combining the human expertise, knowledge and judgement of our tax pros with the cutting edge cognitive computing power of Watson, we are creating a future where every last deduction and credit can be found.”

    “Tax preparation is a perfect use for Watson,” noted David Kenny, Senior Vice President of IBM Watson. “Just like Watson is already revolutionizing other industries like healthcare and education, here H&R Block with Watson is learning to process incredible amounts of information, helping create tailored solutions for H&R Block customers.”

    IBM expects Watson to learn from H&R Block’s millions of unique tax filings how to maximize credits and deductions for every customer, eliminating inconsistencies caused by human tax preparers. The more information Watson receives, says Kenny, the smarter Watson gets.

    “This is a major shift in how man and machine work together to help us in our everyday lives,” says Kenny.

  • Microsoft CEO: We Are Not Anywhere Close To Achieving Artificial General Intelligence

    Microsoft CEO: We Are Not Anywhere Close To Achieving Artificial General Intelligence

    Satya Nadella, CEO of Microsoft, was recently interviewed by Ludwig Siegele of The Economist about the future of AI (artificial intelligence) at the DLD conference in Munich, Germany, where he spoke about the need to democratize the technology so that it is part of every company and every product. Here’s an excerpt transcribed from the video interview:

    What is AI?

    The way I have defined AI in simple terms is we are trying to teach machines to learn so that they can do things that humans do, but in turn help humans. It’s augmenting what we have. We’re still in the mainframe era of it.

    There has definitely been an amazing renaissance of AI and machine learning. In the last five years there’s one particular type of AI called deep neural net that has really helped us, especially with perception, our ability to hear or see. That’s all phenomenal, but if you ask are we anywhere close to what people reference, artificial general intelligence… No. The ability to do a lot of interesting things with AI, absolutely.

    The next phase to me is how can we democratize this access? Instead of worshiping the 4, 5 or 6 companies that have a lot of AI, to actually saying that AI is everywhere in all the companies we work with, every interface, every human interaction is AI powered.

    What is the current state of AI?

    If you’re modeling the world, or actually simulating the world, that’s the current state of machine learning and AI. But if you can simulate the brain and the judgements it can make and transfer learning it can exhibit… If you can go from topic to topic, from domain to domain and learn, then you will get to AGI, or artificial general intelligence. You could say we are on our march toward that.

    The fact that we are in those early stages, where we are at least able to recognize speech and free text, things like keeping track of things, by modeling essentially what it knows about me and my world and my work, is the stage we are at.

    Explain democratization of AI?

    Sure, 100 years from now, 50 years from now, we’ll look back at this era and say there’s been some new moral philosopher who really set the stage as to how we should make those decisions. In lieu of that though one thing that we’re doing is to say that we are creating AI in our products, we are making a set of design decisions and just like with the user interface, let’s establish a set of guidelines for tasteful AI.

    The first one is, let’s build AI that augments human capability. Let us create AI that helps create more trust in technology because of security and privacy considerations. Let us create transparency in this black box. It’s a very hard technical problem, but let’s strive toward saying how do I open up the black box for inspection?

    How do we create algorithm accountability? That’s another very hard problem because I can say I created an algorithm that learns on its own so how can I be held accountable? In reality we are. How do we make sure that no unconscious bias that the designer has is somehow making it in? Those are hard challenges that we are going to go tackle along with AI creation.

    Just like quality, in the past we’ve thought about security, quality and software engineering. I think one of the things we find is that for all of our progress with AI the quality of the software stack, to be able to ensure the things we have historically ensured in software are actually pretty weak. We have to go work on that.